**Why the web page? To introduce Algorithmic Information Theory to non-specialists**

Algorithmic Information Theory (AIT), usually known to mathematicians as Kolmogorov complexity, is a potentially useful tool for inquiring into natural systems. However, as AIT has not been widely used outside of mathematics, it is not always clear how to apply the approach to natural systems. This web page is intended to give those less familiar with the mathematics of Algorithmic Information Theory access to the key ideas. While the page provides links to relevant articles (see the left-hand column), the focus is on my book “Algorithmic Information Theory for physicists and natural scientists”, which is available in pdf format via the link Algorithmic Information Theory bookpdf.

**Key concepts of Algorithmic Information Theory**

**Any structure** or system can be specified by a string of (usually binary) symbols.

**A Universal Turing Machine** (UTM) is a computational device that can simulate any meaningful calculation undertaken on any other computational device. In practice, the typical computer is a UTM, as is a programmable calculator with a branching instruction equivalent to “GO TO IF ZERO”. The natural world itself is a UTM: the physical laws that shift one state to another are the programme.

**A short description = an ordered system.** The core principle of AIT is that systems with short descriptions are more ordered. In a nutshell, the **length** of the shortest algorithm able to specify a system on a standard reference UTM defines the system’s algorithmic complexity, or Kolmogorov complexity. This length is also called the “programme-sized complexity”.
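As an illustrative sketch (not from the book), the contrast can be made concrete: an ordered string of a million bits can be produced by a program a few characters long, while a random string of the same length has, almost certainly, no description shorter than the string itself.

```python
import random

# An ordered string: a million alternating bits is fully specified by a
# tiny program, so its algorithmic complexity is small.
ordered = "01" * 500_000
short_program = "print('01' * 500_000)"   # a few dozen characters suffice

# A random string of the same length almost certainly has no shorter
# description: any program producing it must embed it essentially verbatim.
random.seed(0)                            # fixed seed for repeatability
random_bits = "".join(random.choice("01") for _ in range(1_000_000))

print(len(ordered), len(short_program), len(random_bits))
```

The shortest program is not computable in general; the point of the sketch is only the size contrast between a pattern's description and the pattern itself.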

**Information measure.** The length of this algorithm, in binary form, is a measure of the information content embodied in the system. This measure is consistent with, but not identical to, Shannon information.

**Algorithmic Entropy.** This is the algorithmic complexity defined above, but with the coding set restricted so that no code can be a prefix of any other. The coding used is called “self-delimiting coding” or, for some obscure reason, the ambiguous “prefix coding” (it is in fact prefix-free). Only entropy differences have physical significance. Because the length of the routine needed to simulate one UTM on another cancels in entropy differences, measurements of entropy differences are independent of the UTM used. More importantly, the algorithm that specifies the physical laws can be taken as given by setting an appropriate entropy zero, in which case entropy differences between two states of a real-world system depend primarily on changes in the state settings. The atoms and molecules in a real-world system act as generalised gates, the equivalent of AND and OR. Nevertheless, for a real-world computer, entropy differences can always be specified in binary terms.

**Algorithmic entropy and traditional entropies.** In contrast to traditional entropies, given an appropriate entropy zero, the algorithmic entropy is the length of the algorithm that exactly specifies a particular state. However, the expected value of the algorithmic measure for a typical state in a thermodynamic macrostate returns the same value as the Shannon entropy and, allowing for units, the Boltzmann entropy. This is because the number of bits the algorithm needs to identify a particular state in the macrostate is virtually the same as the number of bits needed to define the Shannon entropy of the macrostate.
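A toy numerical sketch of the correspondence (the macrostate size is an assumed example, not taken from the book): for W equally likely microstates, the Shannon entropy is log2(W) bits, and indexing one typical microstate needs about the same number of bits.

```python
import math

# A toy macrostate: W equally likely microstates.
W = 1024

# Shannon (and, up to units, Boltzmann) entropy of the macrostate, in bits.
shannon_bits = math.log2(W)

# Picking out one typical microstate by its index needs about log2(W) bits,
# so the expected algorithmic description length matches the Shannon value
# up to the fixed O(1) cost of the decoding routine.
index_bits = math.ceil(math.log2(W))

print(shannon_bits, index_bits)
```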

**The provisional algorithmic entropy.** While the algorithmic entropy of a typical state is virtually the Shannon entropy, where a particular state is not typical (i.e. shows order) the algorithmic entropy will be smaller. An ordered state belongs to a subset of the macrostate and is therefore not typical. The provisional entropy is the best guess of the algorithmic entropy given the available information. It is equivalent to Kolmogorov's Algorithmic Minimum Sufficient Statistic, is the algorithmic equivalent of Zurek's physical entropy, and is a discrete form of the Gács entropy, which is effectively a generalisation allowing for infinite strings. The provisional entropy is particularly useful as it provides an entropy measure for a state in a non-equilibrium macrostate. The measure of a particular microstate involves specifying the set (or subset) the state belongs to, plus an algorithm that identifies the particular state within the set.
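The two-part measure can be sketched with assumed toy numbers (all three bit counts below are hypothetical, chosen only to show the arithmetic): naming the ordered subset plus indexing the state within it costs far fewer bits than indexing a state in the full macrostate.

```python
# Toy, assumed numbers for a two-part provisional-entropy estimate.
macrostate_bits = 100          # log2 of the macrostate size (assumed)
subset_index_bits = 10         # log2 of the ordered subset's size (assumed)
subset_description_bits = 20   # assumed length of the algorithm naming the subset

# Two-part code: specify the subset, then the state within the subset.
provisional = subset_description_bits + subset_index_bits

# A typical (unordered) state instead needs the full macrostate index.
typical = macrostate_bits

print(provisional, typical)
```

The ordered state's 30-bit description against the typical state's 100 bits is what "not typical, therefore lower entropy" means in practice.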

**The non-quantum universe**, operating under physical laws, is itself a UTM, as is your laptop, even though the laptop is an insignificant part of the universe. In principle, given sufficient time and storage, your laptop can simulate the physical laws that drive one specific state of the universe to another, provided the computation on the laptop can recognise the final state and then halt. There is the additional obvious proviso that the laptop itself must be excluded from the universe it describes.

**Conceptual issues.** In the AIT approach, highly ordered structures have low entropy, low algorithmic complexity and low information content. A random string, specifying a random array of constituents, has maximum algorithmic complexity, maximum entropy and maximum information content. Both AIT and Shannon’s information theory assign the most information to the most random strings, because more bits are required to define or predict an outcome when there is greater uncertainty. Scientists, however, often use the words differently. Some use an intuitive idea of information which, in contrast to AIT, assigns the greatest information content to the most ordered structures. Similarly, scientists tend to identify as most complex those structures that are complicated rather than simple, yet not random. Any discussion of information or complexity needs to clarify the definitions used and how these relate to the mathematical definitions of AIT and Shannon.
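The ordered-versus-random contrast can be demonstrated with a computable stand-in (a sketch of my own, not from the book): compressed length is an upper bound on algorithmic complexity, so an ordered string shrinks to almost nothing while a random string of the same length barely compresses at all.

```python
import random
import zlib

# Compressed length as a computable upper bound on algorithmic complexity:
# the ordered string compresses dramatically, the random one hardly at all,
# so the random string carries more information in the AIT/Shannon sense.
ordered = b"01" * 5_000                                    # 10,000 bytes of pattern
random.seed(1)                                             # fixed seed for repeatability
noise = bytes(random.getrandbits(8) for _ in range(10_000))

print(len(zlib.compress(ordered)), len(zlib.compress(noise)))
```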