How to Read the Fundamental Equations of Evolutionary Change in Terms of Information Theory

Scientific study of digital information

Information theory is the scientific study of the quantification, storage, and communication of digital information.[1] The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley in the 1920s, and Claude Shannon in the 1940s.[2]: vii The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering.

A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally probable outcomes) provides less information (lower entropy) than specifying the outcome of a roll of a die (with six equally likely outcomes). Other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.

Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet. The theory has also found applications in other areas, including statistical inference,[3] cryptography, neurobiology,[4] perception,[5] linguistics, the evolution[6] and function[7] of molecular codes (bioinformatics), thermal physics,[8] molecular dynamics,[9] quantum computing, black holes, information retrieval, intelligence gathering, plagiarism detection,[10] pattern recognition, anomaly detection[11] and even art creation.

Overview

Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.[4]

Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.

A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article ban (unit) for a historical application.

Historical background

The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948.

Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling Boltzmann's constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.

Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.

In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:

"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."

With it came the ideas of

  • the information entropy and redundancy of a source, and its relevance through the source coding theorem;
  • the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
  • the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as
  • the bit, a new way of seeing the most fundamental unit of information.

Quantities of information

Information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of information in a single random variable, and mutual information, a measure of information in common between two random variables. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.

The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.

In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because {\displaystyle \lim _{p\rightarrow 0+}p\log p=0} for any logarithmic base.

Entropy of an information source

Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by

{\displaystyle H=-\sum _{i}p_{i}\log _{2}(p_{i})}

where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
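
As a minimal illustration of how the choice of logarithmic base only rescales the result, the following Python sketch (not part of the original article; the function name and the fair-die example are chosen here purely for illustration) computes the entropy of a fair six-sided die in bits, nats, and hartleys.

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution; terms with p = 0 contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair six-sided die: the same distribution measured in three different units.
die = [1 / 6] * 6
print(entropy(die, base=2))        # ~2.585 bits (shannons) per symbol
print(entropy(die, base=math.e))   # ~1.792 nats per symbol
print(entropy(die, base=10))       # ~0.778 hartleys (decimal digits) per symbol
```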

Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.

The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N·H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H.

Figure: The entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function, Hb(p). The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.

If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If {\displaystyle \mathbb {X} } is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some {\displaystyle x\in \mathbb {X} }, then the entropy, H, of X is defined:[12]

{\displaystyle H(X)=\mathbb {E} _{X}[I(x)]=-\sum _{x\in \mathbb {X} }p(x)\log p(x).}

(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and {\displaystyle \mathbb {E} _{X}} is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.

The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:

{\displaystyle H_{\mathrm {b} }(p)=-p\log _{2}p-(1-p)\log _{2}(1-p).}
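
As a small sketch (not from the original article; the helper name is arbitrary), the binary entropy function can be tabulated directly, which shows the maximum of 1 bit at p = 0.5 described in the figure caption above.

```python
import math

def binary_entropy(p):
    """H_b(p) = -p log2 p - (1 - p) log2 (1 - p), in bits (shannons)."""
    if p in (0.0, 1.0):
        return 0.0  # by the convention 0 log 0 = 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"H_b({p}) = {binary_entropy(p):.4f} bits")
# The maximum, 1 bit per trial, occurs at p = 0.5 (a fair coin).
```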

Joint entropy

The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.

For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.

{\displaystyle H(X,Y)=\mathbb {E} _{X,Y}[-\log p(x,y)]=-\sum _{x,y}p(x,y)\log p(x,y)}

Despite similar notation, joint entropy should not be confused with cross entropy.

Conditional entropy (equivocation)

The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:[13]

{\displaystyle H(X|Y)=\mathbb {E} _{Y}[H(X|y)]=-\sum _{y\in Y}p(y)\sum _{x\in X}p(x|y)\log p(x|y)=-\sum _{x,y}p(x,y)\log p(x|y).}

Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:

{\displaystyle H(X|Y)=H(X,Y)-H(Y).}
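
A minimal sketch of this identity, assuming a small made-up joint distribution over two binary variables (the numbers are purely illustrative), computes H(X|Y) from its definition and checks it against H(X,Y) − H(Y):

```python
import math
from collections import defaultdict

def H(probs):
    """Entropy in bits of a collection of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution p(x, y) over two binary variables (illustrative values).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

p_y = defaultdict(float)
for (x, y), p in joint.items():
    p_y[y] += p

# Conditional entropy straight from the definition: -sum p(x,y) log p(x|y).
H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items())

print(H_X_given_Y)                          # ≈ 0.875 bits
print(H(joint.values()) - H(p_y.values()))  # the identity gives the same value
```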

Mutual information (transinformation)

Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication, where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by:

{\displaystyle I(X;Y)=\mathbb {E} _{X,Y}[SI(x,y)]=\sum _{x,y}p(x,y)\log {\frac {p(x,y)}{p(x)\,p(y)}}}

where SI (Specific mutual Information) is the pointwise mutual information.

A basic property of the mutual information is that

{\displaystyle I(X;Y)=H(X)-H(X|Y).}

That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y.

Mutual information is symmetric:

{\displaystyle I(X;Y)=I(Y;X)=H(X)+H(Y)-H(X,Y).}

Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X:

{\displaystyle I(X;Y)=\mathbb {E} _{p(y)}[D_{\mathrm {KL} }(p(X|Y=y)\|p(X))].}

In other words, this is a measure of how much, on average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:

{\displaystyle I(X;Y)=D_{\mathrm {KL} }(p(X,Y)\|p(X)p(Y)).}

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
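
The following sketch (again using a made-up joint distribution, not data from the article) computes I(X;Y) from its definition, which is exactly the Kullback–Leibler divergence between the joint distribution and the product of the marginals, and checks it against the identity I(X;Y) = H(X) + H(Y) − H(X,Y):

```python
import math
from collections import defaultdict

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution p(x, y) (illustrative values only).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

p_x, p_y = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    p_x[x] += p
    p_y[y] += p

# Definition: I(X;Y) = sum_{x,y} p(x,y) log [ p(x,y) / (p(x) p(y)) ]
I = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())

print(I)                                                      # ≈ 0.125 bits
print(H(p_x.values()) + H(p_y.values()) - H(joint.values()))  # same value
```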

Kullback–Leibler divergence (information gain)

The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined

{\displaystyle D_{\mathrm {KL} }(p(X)\|q(X))=\sum _{x\in X}-p(x)\log {q(x)}\,-\,\sum _{x\in X}-p(x)\log {p(x)}=\sum _{x\in X}p(x)\log {\frac {p(x)}{q(x)}}.}

Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).

Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
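
A short sketch (the two distributions below are invented for illustration) computes the divergence in both directions, which also demonstrates the asymmetry mentioned above:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]    # the "true" distribution (what Alice knows)
q = [1/3, 1/3, 1/3]      # Bob's prior

print(kl_divergence(p, q))  # extra bits per symbol when coding with q but the data follow p
print(kl_divergence(q, p))  # a different number: the divergence is not symmetric
```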

Other quantities

Other important information-theoretic quantities include Rényi entropy (a generalization of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information.

Coding theory

Figure: A picture showing scratches on the readable surface of a CD-R. Music and data CDs are coded using error-correcting codes and thus can still be read even if they have minor scratches, using error detection and correction.

Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

  • Data compression (source coding): There are two formulations for the compression problem:
    • lossless data compression: the data must be reconstructed exactly;
    • lossy data compression: allocates the bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
  • Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.

Source theory

Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Rate

Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is

{\displaystyle r=\lim _{n\to \infty }H(X_{n}|X_{n-1},X_{n-2},X_{n-3},\ldots );}

that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is

{\displaystyle r=\lim _{n\to \infty }{\frac {1}{n}}H(X_{1},X_{2},\dots X_{n});}

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.[14]
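
A minimal sketch of this equivalence, assuming a two-state stationary Markov source with an invented transition matrix, evaluates the closed-form entropy rate of a Markov chain and compares it with the joint entropy per symbol for growing block lengths:

```python
import math
from itertools import product

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A two-state stationary Markov source (transition matrix chosen for illustration).
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [0.8, 0.2]   # stationary distribution: pi P = pi

# Closed form for a Markov source: r = sum_i pi_i * H(i-th row of P).
rate = sum(pi[i] * H(P[i]) for i in range(2))
print(rate)       # ≈ 0.569 bits per symbol

def joint_prob(seq):
    """Probability of a whole symbol sequence under the stationary Markov source."""
    p = pi[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= P[a][b]
    return p

# H(X_1, ..., X_n)/n decreases toward the entropy rate as n grows.
for n in (2, 6, 12):
    Hn = H(joint_prob(seq) for seq in product(range(2), repeat=n))
    print(n, Hn / n)
```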

Information rate is defined as

{\displaystyle r=\lim _{n\to \infty }{\frac {1}{n}}I(X_{1},X_{2},\dots X_{n};Y_{1},Y_{2},\dots Y_{n});}

It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Channel capacity

Communication over a channel is the primary motivation of information theory. However, channels often fail to produce an exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.

Consider the communications process over a discrete channel. A simple model of the process is shown below:

{\displaystyle {\xrightarrow[{\text{Message}}]{W}}{\begin{array}{|c| }\hline {\text{Encoder}}\\f_{n}\\\hline \end{array}}{\xrightarrow[{\mathrm {Encoded \atop sequence} }]{X^{n}}}{\begin{array}{|c| }\hline {\text{Channel}}\\p(y|x)\\\hline \end{array}}{\xrightarrow[{\mathrm {Received \atop sequence} }]{Y^{n}}}{\begin{array}{|c| }\hline {\text{Decoder}}\\g_{n}\\\hline \end{array}}{\xrightarrow[{\mathrm {Estimated \atop message} }]{\hat {W}}}}

Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:

{\displaystyle C=\max _{f}I(X;Y).}

This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.

Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

Capacity of particular channel models

  • A continuous-time analog communications channel subject to Gaussian noise; see the Shannon–Hartley theorem.
  • A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm (a numerical sketch follows this list).
  • A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use.
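
As a sketch of these closed forms (the crossover and erasure probabilities below are arbitrary example values), the BSC capacity can also be cross-checked by numerically maximizing the mutual information over the input distribution, which is exactly the definition C = max_f I(X;Y) given above:

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel: 1 - H_b(p) bits per use."""
    return 1 - binary_entropy(p)

def bec_capacity(p):
    """Capacity of the binary erasure channel: 1 - p bits per use."""
    return 1 - p

def bsc_mutual_information(q, p):
    """I(X;Y) for input distribution P(X=1) = q over a BSC with crossover probability p."""
    p_y1 = q * (1 - p) + (1 - q) * p                   # P(Y = 1)
    return binary_entropy(p_y1) - binary_entropy(p)    # H(Y) - H(Y|X)

p = 0.11
best = max(bsc_mutual_information(q / 1000, p) for q in range(1001))
print(bsc_capacity(p), best)   # both ≈ 0.5 bits per use; the maximum is at q = 1/2
print(bec_capacity(0.25))      # 0.75 bits per use
```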

Channels with memory and directed information

In practice many channels have memory. Namely, at time i the channel is given by the conditional probability {\displaystyle P(y_{i}|x_{i},x_{i-1},x_{i-2},\ldots ,x_{1},y_{i-1},y_{i-2},\ldots ,y_{1})}. It is often more convenient to use the notation {\displaystyle x^{i}=(x_{i},x_{i-1},\ldots ,x_{1})} and write the channel as {\displaystyle P(y_{i}|x^{i},y^{i-1})}. In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not there is feedback[15][16] (if there is no feedback the directed information equals the mutual information).

Applications to other fields

Intelligence uses and secrecy applications

Information-theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.

Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on the most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods currently comes from the assumption that no known attack can break them in a practical amount of time.

Information-theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
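
A minimal sketch of the zero-mutual-information property, assuming a one-bit one-time pad with a deliberately biased plaintext (all values invented for illustration), shows that the distribution of the ciphertext alone carries no information about the message:

```python
import math
from itertools import product
from collections import defaultdict

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I between the two coordinates of a joint distribution given as {(a, b): prob}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return H(pa.values()) + H(pb.values()) - H(joint.values())

# A one-bit one-time pad: ciphertext C = M XOR K with a uniform key K independent of M.
p_m = {0: 0.7, 1: 0.3}   # a biased plaintext bit (illustrative)
p_k = {0: 0.5, 1: 0.5}   # uniform key

joint_mc = defaultdict(float)   # joint distribution of (plaintext, ciphertext)
for (m, pm), (k, pk) in product(p_m.items(), p_k.items()):
    joint_mc[(m, m ^ k)] += pm * pk

print(mutual_information(joint_mc))   # ≈ 0: the ciphertext alone reveals nothing about M
```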

Pseudorandom number generation

Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses.
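
To illustrate the gap between the two measures, the following sketch (the distribution is contrived for illustration) compares the Shannon entropy with the min-entropy, the negative log-probability of the most likely outcome, for a source in which one value dominates:

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Rényi entropy of order infinity: -log2 of the probability of the most likely outcome."""
    return -math.log2(max(probs))

# 1024 outcomes, but one of them occurs half the time.
probs = [0.5] + [0.5 / 1023] * 1023

print(shannon_entropy(probs))  # ≈ 6.0 bits: looks moderately random on average
print(min_entropy(probs))      # 1.0 bit: a guesser still succeeds half the time
```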

Seismic exploration

One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.[17]

Semiotics

Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics.[18]: 171 [19]: 137 Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."[18]: 91

Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a choice of competing ones.[20]

Miscellaneous applications

Information theory also has applications in gambling, black holes, and bioinformatics.

See also

  • Algorithmic probability
  • Bayesian inference
  • Communication theory
  • Constructor theory - a generalization of information theory that includes quantum information
  • Formal science
  • Inductive probability
  • Info-metrics
  • Minimum message length
  • Minimum description length
  • List of important publications
  • Philosophy of information

Applications

  • Active networking
  • Cryptanalysis
  • Cryptography
  • Cybernetics
  • Entropy in thermodynamics and information theory
  • Gambling
  • Intelligence (information gathering)
  • Seismic exploration

History

  • Hartley, R.V.L.
  • History of information theory
  • Shannon, C.E.
  • Timeline of information theory
  • Yockey, H.P.

Theory

  • Coding theory
  • Detection theory
  • Estimation theory
  • Fisher information
  • Information algebra
  • Information asymmetry
  • Information field theory
  • Information geometry
  • Information theory and measure theory
  • Kolmogorov complexity
  • List of unsolved problems in information theory
  • Logic of information
  • Network coding
  • Philosophy of information
  • Quantum information science
  • Source coding

Concepts

  • Ban (unit)
  • Channel capacity
  • Communication channel
  • Communication source
  • Conditional entropy
  • Covert channel
  • Data compression
  • Decoder
  • Differential entropy
  • Fungible information
  • Information fluctuation complexity
  • Information entropy
  • Joint entropy
  • Kullback–Leibler divergence
  • Mutual information
  • Pointwise mutual information (PMI)
  • Receiver (information theory)
  • Redundancy
  • Rényi entropy
  • Self-information
  • Unicity distance
  • Variety
  • Hamming distance

References

  1. ^ "Claude Shannon, pioneered digital information theory". FierceTelecom . Retrieved 2021-04-30 .
  2. ^ Shannon, Claude Elwood (1998). The mathematical theory of communication. Warren Weaver. Urbana: University of Illinois Press. ISBN0-252-72546-eight. OCLC 40716662.
  3. ^ Burnham, Thou. P. and Anderson D. R. (2002) Model Choice and Multimodel Inference: A Practical Information-Theoretic Arroyo, Second Edition (Springer Science, New York) ISBN 978-0-387-95364-nine.
  4. ^ a b F. Rieke; D. Warland; R Ruyter van Steveninck; Due west Bialek (1997). Spikes: Exploring the Neural Code. The MIT press. ISBN978-0262681087.
  5. ^ Delgado-Bonal, Alfonso; Martín-Torres, Javier (2016-11-03). "Human vision is determined based on information theory". Scientific Reports. vi (1): 36038. Bibcode:2016NatSR...636038D. doi:x.1038/srep36038. ISSN 2045-2322. PMC5093619. PMID 27808236.
  6. ^ cf; Huelsenbeck, J. P.; Ronquist, F.; Nielsen, R.; Bollback, J. P. (2001). "Bayesian inference of phylogeny and its touch on on evolutionary biology". Science. 294 (5550): 2310–2314. Bibcode:2001Sci...294.2310H. doi:10.1126/scientific discipline.1065889. PMID 11743192. S2CID 2138288.
  7. ^ Allikmets, Rando; Wasserman, Wyeth W.; Hutchinson, Amy; Smallwood, Philip; Nathans, Jeremy; Rogan, Peter K. (1998). "Thomas D. Schneider], Michael Dean (1998) Organization of the ABCR gene: assay of promoter and splice junction sequences". Gene. 215 (1): 111–122. doi:x.1016/s0378-1119(98)00269-8. PMID 9666097.
  8. ^ Jaynes, E. T. (1957). "Information Theory and Statistical Mechanics". Phys. Rev. 106 (4): 620. Bibcode:1957PhRv..106..620J. doi:10.1103/physrev.106.620.
  9. ^ Talaat, Khaled; Cowen, Benjamin; Anderoglu, Osman (2020-10-05). "Method of information entropy for convergence assessment of molecular dynamics simulations". Journal of Applied Physics. 128 (13): 135102. Bibcode:2020JAP...128m5102T. doi:x.1063/5.0019078. S2CID 225010720.
  10. ^ Bennett, Charles H.; Li, Ming; Ma, Bin (2003). "Chain Letters and Evolutionary Histories". Scientific American. 288 (6): 76–81. Bibcode:2003SciAm.288f..76B. doi:10.1038/scientificamerican0603-76. PMID 12764940. Archived from the original on 2007-10-07. Retrieved 2008-03-11 .
  11. ^ David R. Anderson (November 1, 2003). "Some background on why people in the empirical sciences may want to better sympathise the information-theoretic methods" (PDF). Archived from the original (PDF) on July 23, 2011. Retrieved 2010-06-23 .
  12. ^ Fazlollah K. Reza (1994) [1961]. An Introduction to Information Theory. Dover Publications, Inc., New York. ISBN0-486-68210-2.
  13. ^ Robert B. Ash (1990) [1965]. Information Theory. Dover Publications, Inc. ISBN0-486-66521-6.
  14. ^ Jerry D. Gibson (1998). Digital Compression for Multimedia: Principles and Standards. Morgan Kaufmann. ISBN1-55860-369-7.
  15. ^ Massey, James L. (1990). "Causality, Feedback And Directed Information". CiteSeerXten.1.i.36.5688.
  16. ^ Permuter, Haim Henry; Weissman, Tsachy; Goldsmith, Andrea J. (Feb 2009). "Finite Country Channels With Time-Invariant Deterministic Feedback". IEEE Transactions on Information Theory. 55 (ii): 644–662. arXiv:cs/0608070. doi:10.1109/TIT.2008.2009849. S2CID 13178.
  17. ^ Haggerty, Patrick E. (1981). "The corporation and innovation". Strategic Management Journal. 2 (2): 97–118. doi:x.1002/smj.4250020202.
  18. ^ a b Nauta, Doede (1972). The Meaning of Information. The Hague: Mouton. ISBN9789027919960.
  19. ^ Nöth, Winfried (January 2012). "Charles S. Peirce'due south theory of information: a theory of the growth of symbols and of knowledge". Cybernetics and Human Knowing. 19 (1–2): 137–161.
  20. ^ Nöth, Winfried (1981). "Semiotics of ideology". Semiotica, Outcome 148.

Further reading

The classic work

  • Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948. PDF.
    Notes and other formats.
  • R.V.L. Hartley, "Transmission of Information", Bell System Technical Journal, July 1928
  • Andrey Kolmogorov (1968), "Three approaches to the quantitative definition of information" in International Journal of Computer Mathematics.

Other journal articles

  • J. L. Kelly, Jr., Princeton, "A New Interpretation of Information Rate" Bell System Technical Journal, Vol. 35, July 1956, pp. 917–26.
  • R. Landauer, IEEE.org, "Information is Physical" Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci. Press, Los Alamitos, 1993) pp. 1–4.
  • Landauer, R. (1961). "Irreversibility and Heat Generation in the Computing Process" (PDF). IBM J. Res. Dev. 5 (3): 183–191. doi:10.1147/rd.53.0183.
  • Timme, Nicholas; Alford, Wesley; Flecker, Benjamin; Beggs, John M. (2012). "Multivariate information measures: an experimentalist's perspective". arXiv:1111.6857 [cs.IT].

Textbooks on information theory

  • Arndt, C. Information Measures, Information and its Description in Science and Engineering (Springer Series: Signals and Communication Technology), 2004, ISBN 978-3-540-40855-0
  • Ash, RB. Information Theory. New York: Interscience, 1965. ISBN 0-470-03445-9. New York: Dover 1990. ISBN 0-486-66521-6
  • Gallager, R. Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968. ISBN 0-471-29048-3
  • Goldman, S. Information Theory. New York: Prentice Hall, 1953. New York: Dover 1968 ISBN 0-486-62209-6, 2005 ISBN 0-486-44271-3
  • Cover, Thomas; Thomas, Joy A. (2006). Elements of information theory (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-24195-4.
  • Csiszar, I, Korner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems Akademiai Kiado: second edition, 1997. ISBN 963-05-7440-3
  • MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
  • Mansuripur, M. Introduction to Information Theory. New York: Prentice Hall, 1987. ISBN 0-13-484668-0
  • McEliece, R. The Theory of Information and Coding. Cambridge, 2002. ISBN 978-0521831857
  • Pierce, JR. "An introduction to information theory: symbols, signals and noise". Dover (2nd Edition). 1961 (reprinted by Dover 1980).
  • Reza, F. An Introduction to Information Theory. New York: McGraw-Hill 1961. New York: Dover 1994. ISBN 0-486-68210-2
  • Shannon, Claude; Weaver, Warren (1949). The Mathematical Theory of Communication (PDF). Urbana, Illinois: University of Illinois Press. ISBN 0-252-72548-4. LCCN 49-11922.
  • Stone, JV. Chapter 1 of book "Information Theory: A Tutorial Introduction", University of Sheffield, England, 2014. ISBN 978-0956372857.
  • Yeung, RW. A First Course in Information Theory Kluwer Academic/Plenum Publishers, 2002. ISBN 0-306-46791-7.
  • Yeung, RW. Information Theory and Network Coding Springer 2008, 2002. ISBN 978-0-387-79233-0

Other books

  • Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. ISBN 0-486-43918-6
  • James Gleick, The Information: A History, a Theory, a Flood, New York: Pantheon, 2011. ISBN 978-0-375-42372-7
  • A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. ISBN 0-486-60434-9
  • H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, New Jersey (1990). ISBN 0-691-08727-X
  • Robert K. Logan. What is Information? - Propagating Organization in the Biosphere, the Symbolosphere, the Technosphere and the Econosphere, Toronto: DEMO Publishing.
  • Tom Siegfried, The Bit and the Pendulum, Wiley, 2000. ISBN 0-471-32174-5
  • Charles Seife, Decoding the Universe, Viking, 2006. ISBN 0-670-03441-X
  • Jeremy Campbell, Grammatical Man, Touchstone/Simon & Schuster, 1982, ISBN 0-671-44062-4
  • Henri Theil, Economics and Information Theory, Rand McNally & Company - Chicago, 1967.
  • Escolano, Suau, Bonev, Information Theory in Computer Vision and Pattern Recognition, Springer, 2009. ISBN 978-1-84882-296-2
  • Vlatko Vedral, Decoding Reality: The Universe as Quantum Information, Oxford University Press 2010. ISBN 0-19-923769-7

MOOC on information theory

  • Raymond W. Yeung, "Information Theory" (The Chinese University of Hong Kong)

External links

  • "Information", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  • Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education
  • IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews

Source: https://en.wikipedia.org/wiki/Information_theory
