Tony Smith's Home Page

Classical and Quantum Information Theory

Golay Codes, Nonlinear Codes, and Gray Codes

Quantum Reed-Muller and Golay Codes

Quantum Computing

Quantum Cellular Automata

Quantum Game Theory


Information is the underlying basis of the Quantum Consciousness of All Life in ALL SPACES.

Physical Wavelets provide a connection between the World of Physics and the World of Information. Paola Zizzi's ideas of our Universe as a Computer may make that connection more concrete.
Quantum Computing and Information Theory, a promising New Technology, is based on the Galois field GF(4) = {0,1,w,w^2} where w = (1/2)( - 1 + sqrt(3) i ) and w^2 = (1/2)( - 1 - sqrt(3) i ).
{0,1,w,w^2} corresponds to 
the Dynkin diagram of the D4 Lie algebra Spin(8) 
used in the D4-D5-E6-E7-E8 VoDou Physics model, such that:  
0 corresponds to the 28-dimensional bivector adjoint representation; 
1   corresponds to the 8-dimensional vector representation; 
w   corresponds to one 8-dimensional half-spinor representation; and  
w^2 corresponds to the other 8-dimensional half-spinor representation.  
{0,1,w,w^2} generates the hexagonal lattice of Eisenstein Integers 
in the Complex Plane; corresponds to the quaternions {1,i,j,k}; 
corresponds to the Tai Hsuan Ching; and 
corresponds to symmetries of the Torah such as 
the (3,10) Torus Knot and the 3x3x3 Magic Cube.  
Quantum Information Theory has the same structure as Quantum Field Theory.
The D4-D5-E6-E7-E8 VoDou Physics model has a natural representation 
on each of the three types of substrate described in the BARDOs: 
material (Massive Spacelike beings); 
luminous dharmata (Massless Lightcone beings); and 
clear light (Many-Worlds abstract beings).  

  Human life is based on interactions of chemical elements on earth.   Chemical elements are made of Massive Spacelike matter (electrons and quarks) by the laws of particle physics, represented by the D4-D5-E6-E7-E8 VoDou Physics model.   Interactions of chemical elements occur through interactions with gauge bosons (such as photons) by the laws of particle physics, represented by the D4-D5-E6-E7-E8 VoDou Physics model.  
  Massless Lightcone particles can carry and process information.   Gamma-ray laser beams using stars as gravitational lenses could carry information throughout a galaxy.   The efficiency with which photon (and/or graviton) beams carry information is determined by how the information is coded.


Classical Shannon Information Theory

( compare Quantum Information Theory )

Lachmann, Newman, and Moore, in cond-mat/9907500, say:

"... It has been well-known since the pioneering work of Claude Shannon in the 1940s that a message transmitted with optimal efficiency over a channel of limited bandwidth is indistinguishable from random noise to a receiver who is unfamiliar with the language in which the message is written.

In this letter we demonstrate an equivalent result about electromagnetic transmissions.

We show that when electromagnetic radiation is used as the transmission medium, the most information-efficient format for a given message is indistinguishable from black-body radiation to a receiver who is unfamiliar with that format. The characteristic temperature of the radiation is set by the amount of energy used to make the transmission.

If information is not encoded in the direction of the radiation, but only its timing, energy or polarization, then the most efficient format has the form of a one-dimensional black-body spectrum, which is easily distinguished from the three-dimensional case. ...

... For the case where the transverse momentum of photons is not used to encode information, or for broadcast transmissions, the spectrum is indistinguishable from that of a one-dimensional black body. We have also shown that the characteristic temperature of the message is simply related to the energy used to send it, and we have derived an upper limit on the rate at which information can be transmitted in both cases. ...

... we speculate that results similar to those presented here may apply to transmissions using other radiative media as well. Since many natural processes maximize the Gibbs-Boltzmann entropy, they should give rise to spectra indistinguishable from optimally efficient transmissions. For instance, if we had a transmitter that can emit any type of particle (rather than just photons), it seems plausible that the optimal spectrum of particle types and energies would be that of Hawking black-hole radiation...".

Classical Information Theory may be related to some aspects of Urim v'Tumim. Here, paraphrasing from the reference books listed below, are some details about the Classical Information Theory of Classical Error-correcting Codes, especially Golay Codes.
Error-correcting codes were discovered in the mid-20th century, after R. W. Hamming, irritated by his computer stopping whenever it encountered an error, realized that if his computer could detect errors it should also be able to locate and correct them.

The binary Golay code C24 = [24,12,8] is a code of length 24, dimension 12, and minimal distance 8 over the binary field F2. Of the 2^24 strings of 24 zeroes and ones, 2^12 = 4096 are in C24. They can be divided into classes:

  • 1 that has 24 zeroes;
  • 759 that have 16 zeroes and 8 ones;
  • 2,576 that have 12 zeroes and 12 ones;
  • 759 that have 8 zeroes and 16 ones;
  • 1 that has 24 ones.
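This weight distribution can be checked by direct construction. A small sketch: take the standard degree-11 generator polynomial of the cyclic [23,12,7] Golay code, span its twelve cyclic shifts over GF(2), and append an overall parity bit to get [24,12,8].

```python
from collections import Counter

# Generator polynomial x^11 + x^9 + x^7 + x^6 + x^5 + x + 1 of the
# cyclic [23,12,7] binary Golay code, as a 12-bit mask (0xAE3).
g = 0xAE3

# The 12 shifts g, g<<1, ..., g<<11 span the code (each fits in 23 bits).
words = {0}
for i in range(12):
    row = g << i
    words |= {w ^ row for w in words}

# Extend each word with an overall parity bit: [23,12,7] -> [24,12,8].
extended = [(w << 1) | (bin(w).count("1") & 1) for w in words]

dist = Counter(bin(w).count("1") for w in extended)
print(sorted(dist.items()))  # [(0, 1), (8, 759), (12, 2576), (16, 759), (24, 1)]
```

The printed distribution reproduces exactly the five classes listed above.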

The symmetry group of C24 is the simple Mathieu group M24, of order 24x23x22x21x20x48 = 244,823,040. M24 has several interesting subgroups, including:

  • PSL2(23)
  • the sextet group 2^6:3xS6
  • the octad group 2^4:A8 = 2^4:GL4(2)
  • PSL3(4) = M21
  • the trio group 2^6:(S3xL2(7)) = 2^6:(S3xL3(2))
  • the Mathieu group M23
  • the Mathieu groups M22 and M22:2
  • the Mathieu group M12 associated with the Steiner system S(5,6,12) and the ternary Golay code C12 [12,6,6] with 3^6 = 27x27 = 729 words. C12 can be constructed using Rubik-icosahedron twist-permutations.

An octad is a Golay codeword of weight (distance from the origin) 8. The octads of C24 constitute a Steiner system S(5,8,24). The Steiner system S(5,8,24) is a set with 24 points and a collection of distinct 8-subsets (called blocks) such that any 5-subset is contained in exactly 1 block.
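The counts are consistent: each octad contains C(8,5) different 5-subsets, and the 759 blocks of S(5,8,24) together cover every one of the C(24,5) 5-subsets exactly once, so 759 x C(8,5) must equal C(24,5). A quick check:

```python
from math import comb

octads = 759
# Each octad covers comb(8, 5) distinct 5-subsets; in the Steiner system
# S(5,8,24) every 5-subset of the 24 points lies in exactly one octad.
print(octads * comb(8, 5), comb(24, 5))  # 42504 42504
```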

The MOG (Miracle Octad Generator) is a 4x6 binary array, so it carries 24 bits of information.

The hexacode C6 is a 3-dimensional code of length 6 over F4. C6 has 36+12+9+6+1 = 64 = 2^6 = 4^3 = sqrt(4^6) hexacodewords.

The Mathieu group M24 is transitive on trios, each consisting of three disjoint octads. The subgroup fixing a trio is a group 2^6:(S3xL2(7)); it is significant that L2(7) is isomorphic to L3(2), which is Klein's simple group of order 168 = 7x6x4 and is the automorphism group of the binary Hamming code H7 [7,4,3].

By fixing two points of the 24 we obtain the Mathieu group M22 which is the stabilizer of the Steiner system S(3,6,22) obtained by deleting those two points from all octads of S(5,8,24) that contain them both.

 In math.CO/0207208, Hammons, Kumar, Calderbank, Sloane, and Sole say:

"...The classical theory of cyclic codes, which includes BCH, Reed-Solomon, Reed-Muller codes, etc., regards these codes as ideals in polynomial rings over finite fields. Some famous nonlinear codes found by Nordstrom-Robinson, Kerdock, Preparata, Goethals and others, more powerful than any linear codes, cannot be handled by this machinery. We have shown that when suitably defined all these codes are ideals in polynomial rings over the ring of integers mod 4. This new point of view should completely transform the study of cyclic codes. ... Kerdock codes contain more codewords than any known linear code with the same minimal distance (although we are not aware of any theorem to guarantee this, except at length 16) ... Kerdock and Preparata codes exist for all lengths n = 4^m > 16. ... Kerdock and `Preparata' codes are duals over Z4 - and the Nordstrom-Robinson code is self-dual ... the Kerdock code is simply the image of the quaternary code (when extended by a zero-sum check symbol) under the Gray map ... the Gray map translates a quaternary code with high minimal Lee or Euclidean distance into a binary code of twice the length with high minimal Hamming distance. ...

... The Kerdock and Preparata codes of length 16 coincide, giving the Nordstrom-Robinson code. This is the unique binary code of length 16, minimal distance 6, containing 256 words. In this case K ... the cyclic code of length 16 over Z4 ...[plus]... adjoining a zero-sum check symbol. ... is the ... octacode ... the unique self-dual quaternary code of length 8 and minimal Lee weight 6, or ... the 'glue code' required to construct the 24-dimensional Leech lattice from eight copies of the face-centered cubic lattice. ... The Nordstrom-Robinson code is the binary image of the octacode under the Gray map. ...

... In communication systems employing quadrature phase-shift keying (QPSK), the preferred assignment of two information bits to the four possible phases is the one ... in which adjacent phases differ by only one binary digit. This mapping is called Gray encoding and has the advantage that, when a quaternary codeword is transmitted across an additive white Gaussian noise channel, the errors most likely to occur are those causing a single erroneously decoded information bit. ... The crucial property of the Gray map is that it preserves distances. ... from (Z4^n, Lee distance) to (Z2^2n, Hamming distance) ...

... The quaternary octacode has an automorphism group of order 1344, whereas the group of the binary Nordstrom-Robinson code has order 80640. ...

... The Kerdock and `Preparata' codes are Z4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z4, which greatly simplifies encoding and decoding. ... Binary first- and second-order Reed-Muller codes are also linear over Z4, but extended Hamming codes of length n > 32 and the Golay code are not. ... the Nordstrom-Robinson code is Z4-linear ... and is closely connected with the ... [24, 12, 8] Golay code ...[which, although linear in its normal formulation,]... is not Z4-linear. ...".
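The distance-preserving property of the Gray map quoted above is easy to verify exhaustively for small n. A sketch, using the standard Gray images 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10 (the n = 3 exhaustive check is my own illustration):

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # adjacent phases differ in one bit

def lee_dist(u, v):
    # Lee distance on Z4^n: per coordinate, the shorter way around the 4-cycle.
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(u, v))

def hamming_dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def gray(u):
    # The Gray map Z4^n -> Z2^(2n), applied coordinate-wise.
    return tuple(bit for a in u for bit in GRAY[a])

n = 3
for u in product(range(4), repeat=n):
    for v in product(range(4), repeat=n):
        assert lee_dist(u, v) == hamming_dist(gray(u), gray(v))
print("Gray map is an isometry (Lee -> Hamming) for n =", n)
```

All 4096 pairs of quaternary words of length 3 pass, confirming the isometry from (Z4^n, Lee distance) to (Z2^2n, Hamming distance).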

In their book The Theory of Error-Correcting Codes (North-Holland Elsevier 1977), MacWilliams and Sloane say:

"... The extended Golay code G24 may be used to construct ... The Nordstrom-Robinson code N16 ... divide up the [G24] codewords according to their values on the first 7 coordinates: there are 2^7 possibilities, and for each of these there are 2^12 / 2^7 = 32 codewords. Thus there are 8 x 32 = 256 codewords which begin either with seven 0's (with 8th coordinate 0), or with six 0's and a 1 (with 8th coordinate 1) ... The Nordstrom-Robinson code N16 is obtained by deleting the first 8 coordinates from these 256 vectors. ... N16 is a (16, 256, 6) code ... made up of a linear [16,5,8] code ... plus 7 of its cosets in G24 ... N16 is not linear ...".

In their book Sphere Packings, Lattices, and Groups (3rd ed., Springer 1999), Conway and Sloane say:

"... If a corresponding packing ...[to]... the Nordstrom-Robinson code ... were to exist, it would set new records for the density and kissing number in 16 dimensions. No such lattice exists ... and we conjecture that there is also no such nonlattice packing. ...".
Coding of information is a problem in Classical Information Theory. 
Three good reference books are: 
Conway and Sloane, Sphere Packings, Lattices and Groups (2nd ed), 
              Springer (1993); 
Roman, Coding and Information Theory, Springer (1992); 
Thompson, From Error-Correcting Codes Through Sphere Packings To  
              Simple Groups, Mathematical Asso. of America (1983). 
The extended Golay code G24 is closely related to 
the 24-dimensional Leech lattice, which can be constructed from it.  
The automorphism group 
of G24 is the Mathieu group M24.  
G24 may be related to the Urim v'Tumim.
G24 is the (24,2^12) d=8 self-dual classical code over GF(2), 
which is sometimes denoted (24,12,8) 
because it uses binary 24-blocks with 12 message bits and 
Hamming minimal distance 8.  
G24 has 2^12 = 4096 codewords:  
1       (denoted  0)  is at Hamming distance  0 from zero;   
759     (denoted  8) are at Hamming distance  8 from zero;   
2576    (denoted 12) are at Hamming distance 12 from zero;   
759     (denoted 16) are at Hamming distance 16 from zero;   
1       (denoted 24)  is at Hamming distance 24 from zero.    
The weight distribution can also be written: 
0(1)  8(759)  12(2576)  16(759)  24(1)
The number of dodecads comes from the combinatorial table:  
   0     0    16     0    24     0    16     0     0
      0    16    16    24    24    16     16    0
         16   32    40    48    40    32     16
           48    72    88    88    72     48   
             120   160   176   160   120    
                280   336   336   280     
                   616   672   616       
                     1288  1288        
The number of octads comes from the combinatorial table:  
  30     0    16     0     4     0     0     0     1
     30    16    16     4     4     0     0     1
        46    32    20     8     4     0      1
           78    52    28    12     4      1   
             130    80    40    16     5    
                210   120    56    21     
                   330   176    77       
                      506   253        
The octads of G24 are a Steiner system S(5,8,24), 
so that any 5 points of the 24 are in one unique octad. 
Since G24 can be constructed from its Steiner system, 
G24 can be constructed from its octads, 
and each of its octads can be constructed 
from 5 of its 8 nonzero elements.  
The 24-dimensional Leech lattice can not only be constructed 
as a real 24-dimensional lattice /\R24 
from the binary (24,12,8) Golay code G24 over GF(2), 
it can also be constructed 
as a complex 12-dimensional lattice /\C12 
from the ternary (12,6,6) Golay code G12 over GF(3), 
with Steiner system S(5,6,12).  
G12 has 3^6 = 729 codewords, with weight distribution:  
0(1)  6(264)  9(440)  12(24).    
The 24-dimensional Leech lattice can also be constructed 
as a quaternionic 6-dimensional lattice /\Q6 
from the (6,3,4) hexacode H6 over GF(4).  
H6 has 4^3 = 64 codewords, with weight distribution:  
0(1)  4(45)  6(18).    
The 24-dimensional Leech lattice can be used, 
by 3x3 octonion matrices, 
to construct the D4-D5-E6 model.  
The automorphism group of the Leech lattice is 
the Conway group .0 (dotto).   
The 24-dimensional Leech lattice represents 8-dimensional space-time, 
the 8 first generation fermion particles, and 
the 8 first generation fermion antiparticles.  
Each vertex has 196,560 nearest neighbors, 
whose permutation group is dotto. 
196,560 + 300 + 24 = 196,884 
(300 = 25x24/2 = symmetric tensor square of 24)  
is the dimension of a representation space of the Monster group, 
whose order is the product 
2^46 3^20 5^9 7^6 11^2 13^3  17 19 23 29 31 41 47 59 71, 
or about 8 x 10^53.  
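The arithmetic above can be checked directly (a small sketch; the order is computed from the prime factorization just quoted):

```python
# 196,560 Leech lattice nearest neighbors + 300 (symmetric square of 24) + 24
# should give the 196,884-dimensional Monster representation space.
assert 196560 + 300 + 24 == 196884
assert 300 == 25 * 24 // 2   # dimension of the symmetric tensor square of 24

# Order of the Monster group from its prime factorization.
order = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
         * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)
print(f"|Monster| ~ {order:.3e}")   # about 8.080e+53
```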
IF the 196,560 points form a group, 
just as the 240 of an E8 lattice form unit octonions 
and as the 24 of a D4 lattice form unit quaternions 
(this is a goal of the current work of Geoffrey Dixon), 
then it should be possible to form the 196,560 dimensional space 
of the group algebra of the Leech lattice nearest-neighbor group, 
and then add the 300-dim space of symmetric squared Leech lattice, 
and then add the 24-dim space of the Leech lattice itself, 
to get the 196,884-dim representation space of the Monster.  
By going down to the underlying 24-dim Leech lattice space, 
it should be possible to 
represent the Monster on the 24-dim space of the Leech lattice.  

The New Technology of Quantum Computing is based on

Quantum Information Theory,

which differs from Classical Information Theory.

David Deutsch, who has a parallel universes web site, has described a Universal Quantum Computer based on Many-Worlds Quantum Theory.

Michael Gibbs has a WWW page with a Basic Introduction to Quantum Information Theory.

The Centre for Quantum Computation at the University of Oxford has a good web page. Other good WWW sites on quantum computation are at Caltech and the web pages of Samuel Braunstein.

In quantum information theory, Cerf and Adami describe virtual qubit-anti-qubit pairs (they call them ebit-anti-ebit pairs) that are related to negative conditional entropies for quantum entangled systems and are similar to fermion particle-antiparticle pairs of particle physics.

Cerf and Adami have written a paper "Prolegomena to a Non-Equilibrium Quantum Statistical Mechanics" in which they "... suggest that the framework of quantum information theory ... is a reasonable starting point to study non-equilibrium quantum statistical phenomena. As an application, [they] discuss the non-equilibrium quantum thermodynamics of black hole formation and evaporation. ... "

In Baez Week 34, John Baez describes quantum computing:

It's easiest to see why machines that take advantage of quantum theory might be efficient at computation if we think in terms of path integrals. In Feynman's path-integral approach to quantum theory, the probability of getting from state A at time zero to state B some later time is obtained by integrating the exponential of the action

exp(iS/hbar)

over *all* paths from A to B, and then taking the absolute value squared. (Here we are thinking of states A and B that correspond to points in the classical configuration space.) We can think of the quantum system as proceeding along all paths simultaneously; it is the constructive or destructive interference between paths due to the phases exp(iS/hbar) that makes certain final outcomes B more likely than others. In many situations, there is massive destructive interference except among paths very close to those which are critical points of the action S; the latter are the *classical* paths. So in a sense, a classical device functions as it does by executing all possible motions; motions far from those satisfying Newton's laws simply cancel out by destructive interference! (There are many other ways of thinking about quantum theory; this one can be difficult to make mathematically rigorous, but it's often very handy.)

This raises the idea of building a computer that would take advantage of quantum theory by trying out all sorts of paths, but making sure that paths that give the wrong answer cancel out! It seems that Feynman was the first to seriously consider quantum computation: Simulating physics with computers, by Richard Feynman, International Journal of Theoretical Physics, Vol. 21, nos. 6/7, pp. 467--488 (1982).

The article of Baez is motivated by a paper of Peter Shor, described by David DiVincenzo as showing that, on a quantum computer, factoring large numbers that have only large prime factors can be performed in polynomial time, as opposed to the exponential time thought to be required by a classical computer. The main result of DiVincenzo's paper is that 2-bit gates are universal for quantum computation.

Quantum information processing rules are somewhat different from those of classical information theory. For instance (Ekert, Nature 367 (10 Feb 94) 513-514) Benjamin Schumacher has shown that for quantum 2-state systems there is a lower bound on the number of quantum bits per symbol and that the lower bound is LESS than the classical entropy of the Shannon noiseless coding theorem. The relevant difference is that two distinguishable symbol-state preparations can produce two different symbol-states that CANNOT be reliably distinguished.
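The effect can be illustrated numerically. A sketch, assuming a source emitting two equiprobable non-orthogonal qubit states |0> and |+> (my own illustrative ensemble, not Schumacher's specific example): the von Neumann entropy of the ensemble density matrix falls below the 1-bit Shannon entropy of the source.

```python
from math import sqrt, log2

# Density matrix of the ensemble {|0> with p=1/2, |+> with p=1/2}:
# rho = 1/2 |0><0| + 1/2 |+><+| = [[3/4, 1/4], [1/4, 1/4]].
a, b, c, d = 0.75, 0.25, 0.25, 0.25

# Eigenvalues of a 2x2 Hermitian matrix from its trace and determinant.
tr, det = a + d, a * d - b * c
lam1 = (tr + sqrt(tr * tr - 4 * det)) / 2
lam2 = (tr - sqrt(tr * tr - 4 * det)) / 2

S = -sum(lam * log2(lam) for lam in (lam1, lam2))  # von Neumann entropy (qubits/symbol)
H = 1.0                                            # Shannon entropy of the 50/50 source
print(f"S(rho) = {S:.4f} < H = {H}")               # S(rho) = 0.6009 < H = 1.0
```

The gap between 0.6009 qubits and 1 bit exists precisely because |0> and |+> cannot be reliably distinguished.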

Walsh functions (boolean analogues of Fourier basis functions) are a class of balanced functions that are distinguishable quickly and WITHOUT ERROR by quantum computation. A classical computer could distinguish them quickly only if some (perhaps small) error were to be allowed. (Bennett, Nature 362 (22 Apr 93) 694-695, describing the work of David Deutsch and Richard Jozsa, and of Ethan Bernstein and Umesh Vazirani)
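Walsh functions can be written as f_a(x) = (-1)^(a.x), where a.x is the bitwise inner product mod 2. The Hadamard (Walsh) transform of f_a is concentrated entirely on the single index a, which is the structure the quantum algorithms exploit to identify a in one query. A classical sketch of that transform (shown only to illustrate the structure, not the quantum speedup):

```python
def walsh(a, n):
    # Walsh function f_a(x) = (-1)^(a.x), with a.x the bitwise inner product mod 2.
    return [(-1) ** bin(a & x).count("1") for x in range(2 ** n)]

def fwht(vec):
    # Fast Walsh-Hadamard transform (unnormalized butterfly).
    v = list(vec)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

n, a = 4, 0b1011
spectrum = fwht(walsh(a, n))
# All 2^n = 16 units of amplitude pile up on index a; every other entry is zero.
print(spectrum.index(16), a)  # 11 11
```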

Pertti Lounesto, in Chapter 21 of his book Clifford Algebras and Spinors (Cambridge 1997), deals with Binary Index Sets and Walsh Functions.

The similarity of the balanced function illustrations to bar codes makes me think of bosonic optical quantum computing rather than fermionic solid-state quantum computers, which may be very hard to fabricate due to decoherence.

Anglin, Paz, and Zurek mention experiments by Brune et al in which the increase of the decoherence rate as the square of the separation scale is confirmed over a limited range of separations. Anglin, Paz, and Zurek show that decoherence can be very complicated due to such things as colored noise, dissipative terms with memory, backreactions, temporal and spatial non-linearity, and non-locality. Such things can cause saturation of decoherence at long distances and other interesting things.

Gershenfeld and Chuang (Science 275 (17 Jan 97) 350-356), along with Cory, Fahmy, and Havel (see Science 275 (17 Jan 97) 307-309, article by Gary Taubes; and New Scientist, 1 Feb 97, p. 16, article by Howard Baker) propose to avoid decoherence of superposition by coding quantum information by the spin states of nuclei of complex molecules, and then observing many (about 10^23) such molecules in solution using NMR. Such nuclear spin states can remain coherent for thousands of seconds, as the nuclei are screened by atomic electron clouds and their rapid motion averages out external interactions. As the NMR observation looks at a given type of nucleus in all the molecules statistically over a period of time, instead of a single nucleus in a single molecule at one time, the act of observation may not destroy superposition. Since caffeine is a good molecule for this, quantum computers may be realized by the Infinite Improbability Drive mechanism of Douglas Adams, a "really hot cup of tea". However, with molecules similar to caffeine, they have only worked with up to 4 bits of information. To do interesting quantum calculations, you need around 50 to 100 bits. As the NMR signal gets weaker rapidly as the number of spins in the molecule increases, there are now significant practical difficulties with interesting quantum calculations.

Vandersypen, Steffen, Breyta, Yannoni, Sherwood, and Chuang, in their paper Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance, quant-ph/0112176, say:

"... we report an implementation of the simplest instance of Shor's algorithm: factorization of N=15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule ...

.... as quantum bits, which can be manipulated with room temperature liquid state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to many quantum bit systems, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects in our system. ...".

Seth Lloyd has proposed preventing decoherence by using Quantum Controllers for Quantum Systems, and he notes that Quantum Controllers may have many useful applications.

A problem with photon-based computing is that, since photons are U(1) abelian gauge bosons, they do not interact with each other at tree-level. To get interactions for use in computations, you would have to use higher-order interactions, which are suppressed by the electromagnetic fine structure constant, or indirect non-linear interactions intermediated by material such as optically non-trivial crystals or fibres.

For example, Torma and Stenholm have proposed using the Faraday effect and polarized photons to construct quantum computing devices.

Hogg and Chase at Xerox PARC look at how to construct Quantum Smart Matter, in which quantum spatial coherence would be used to control the properties of a material. A possible application of their work could be to actively control the optical properties of a material so that it could mediate interactions among photons and therefore be a component of a photonic quantum computer.

A group at Innsbruck is also studying polarized photons.

One problem that would be a nice application for quantum computing is the travelling salesman problem. A recent classical advance toward solving that problem is using a workstation and 50 software ants on a loop who then evolve (the ants can die or reproduce) to find within 44 hours a solution (accurate to 4 per cent) to the 30,000 point travelling salesman problem. The previous best classical effort used a Cray for 18 months to solve the 3,000 point problem. (Arthur, New Scientist 4 Jun 94, p. 6, describing the work at the BT systems research division of Shara Amin and Jose-Louis Fernandez)

In quant-ph/0201031, Quantum Adiabatic Evolution Algorithms versus Simulated Annealing, Farhi, Goldstone, and Gutman say:

"... quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time. ... there is no general theorem that quantum adiabatic algorithms must fail if simulated annealing fails. ... Quantum adiabatic evolution algorithms are designed to minimize a (classical) cost function whose domain is the 2^n values taken by n bits. ... Simulated annealing ...[ Simulated annealing has been used in various combinatorial optimization problems and has been particularly successful in circuit design problems (see Kirkpatrick, S., C. D. Gelatt Jr., M. P. Vecchi, "Optimization by Simulated Annealing",Science, 220, 4598, 671-680, 1983). ]... is a classical local search strategy that can be used to find the global minimum of a function ... The idea is to construct a Markov chain that starts in the tau = infinity Boltzmann distribution ...[for]... all 2^n strings ... equally likely ... and gradually moves through the Boltzmann distributions for decreasing tau down to tau near 0. If the process succeeds, the final distribution will be close to the zero temperature Boltzmann distribution, which means that the global minimum of ...[the function]... has been found with high probability. ... The question is how many transitions must be made in order to stay near the Boltzmann distributions as the temperature is lowered down to near 0. If the total number of steps required grows only as a polynomial in n, we say that the simulated annealing algorithm succeeds. ... 
When the cost depends smoothly on the Hamming weight of the n bits, quantum adiabatic evolution and simulated annealing typically perform similarly. ... however ... there are examples where this similarity breaks down. In the spike example, the cost function has a barrier that can be penetrated quantum mechanically but not classically, so the quantum algorithm succeeds in polynomial time whereas annealing does not. In the bush example, the addition of a single spin-1/2 leads to quantum behavior with no classical analogue. ...".
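A minimal classical simulated-annealing sketch in the paper's setting (a cost depending only on Hamming weight; the particular cost, bit count n = 20, move count, and geometric cooling rate are my own illustrative choices):

```python
import random
from math import exp

random.seed(1)
n = 20

def cost(bits):
    # A cost that depends only on the Hamming weight, minimized at weight 0.
    return sum(bits)

state = [random.randint(0, 1) for _ in range(n)]
tau = 2.0                      # temperature, lowered geometrically toward 0
while tau > 0.01:
    for _ in range(100):
        i = random.randrange(n)
        trial = state[:]
        trial[i] ^= 1          # single-bit-flip move
        delta = cost(trial) - cost(state)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if delta <= 0 or random.random() < exp(-delta / tau):
            state = trial
    tau *= 0.9
print("final cost:", cost(state))
```

For a smooth cost like this, the chain tracks the Boltzmann distribution down to low temperature and finds the global minimum; the paper's spike example is built so that exactly this procedure gets stuck.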


In quant-ph/9503017, Barenco, Deutsch, and Ekert describe a quantum controlled-NOT gate that might be realized by selective driving of optical resonances of two subsystems undergoing a dipole-dipole interaction.

A possible example of such optically-linked dipole-dipole subsystems might be neural microtubules with light carried through water within the microtubules.

In quant-ph/9505018, Deutsch, Barenco, and Ekert show that almost every quantum computation gate that operates on at least 2 bits is a universal gate. They say:

"We conjecture that the non-universal gates are precisely

the 1-bit gates and collections of 1-bit gates; and

the classical gates.

If true, this would reveal an interesting connection between the existence of a 'classical level' in physics (i.e. a regime in which classical physics is a good approximation to quantum physics) and the existence of classical computation as a closed and stable regime within quantum computation."


They also say: "... the intuitive nature of the classically-available computational operations, ... allowed pioneers ... to capture the correct classical theory by intuition alone, and falsely to assume that its foundations were self-evident or at least purely abstract. (There is an analogy here with geometry, another branch of physics that was formerly regarded as belonging to mathematics.)"


Quantum Information Theory

( compare Classical Information Theory )

Quantum Information Theory may be related to some aspects of Urim v'Tumim.

Calderbank, Rains, Shor, and Sloane 
describe error correction in quantum codes.  
As they say, given the quantum state space C^2^n of n qubits, 

"... The known quantum codes seemed to have close connections to a finite group of unitary transformations of C^2^n, known as a Clifford group, ... [containing] all the transformations necessary for encoding and decoding quantum codes. It is also the group generated by fault-tolerant bitwise operations performed on qubits that are encoded by certain quantum codes. ...

... the unique [[ 2, 0, 2 ]] code corresponds to the quantum state (1/sqrt(2))( | 01 > - | 10 > ), that is, an EPR pair. ...".

Calderbank, Rains, Shor, and Sloane show that 
whereas many useful classical-error-correcting codes are binary, 
over the Galois field GF(2) = {0,1},  
quantum-error-correcting codes are quaternary, 
over the Galois field GF(4) = {0,1,w,w^2} 
where w    =  (1/2)( - 1 + sqrt(3) i ) 
and   w^2  =  (1/2)( - 1 - sqrt(3) i ). 
Some interesting binary classical codes can be used 
to construct quaternary quantum codes.  For example, 
the Golay code (24,2^12) d=8 self-dual classical code over GF(2), 
which is sometimes denoted (24,12,8) 
because it uses binary 24-blocks with 12 message bits and 
Hamming minimal distance 8, 
should give a quantum code [[ 24, 0, 8 ]]  
mapping a quantum state space of 24 qubits into 24 qubits, 
correcting [(8-1)/2] = 3 errors, and detecting 8/2 = 4 errors.  
They note that codes with only one codeword CAN be useful, 
either to test decoherence times of specific storage bins 
or to construct other codes with more codewords.  
They give another interesting example:  concatenating 
the perfect [[ 5, 1, 3 ]] code with itself to get [[ 25, 1, 9 ]].  
They also state that, although 
there is no classical (24,4^12) d=10 over GF(4), 
it is unknown whether 
there exists an additive (even) self-dual (24,2^24) d=10 code.  

Andrew Steane, in quant-ph/9802061, shows that

"... a classical error correcting code C = [n,k,d] which contains its dual (Cperp a subset of C), and which can be enlarged to C' = [n, k' > k+1, d'], can be converted into a quantum code of parameters [[ n, k+k'-n, min(d, [3d'/2]) ]]. This is a generalisation of a previous construction; it enables many new codes of good efficiency to be discovered. Examples based on classical Bose-Chaudhuri-Hocquenghem (BCH) codes are discussed."



  In hep-th/9302113 Geoffrey Dixon notes that GF(4) is related to quaternions as GF(2) is to complex numbers and as GF(8) is to octonions. Dixon shows how the division algebras arise from quadratic residue codes over Galois Fields GF(2^k) of powers of 2 that correspond to Galois sequences. Only for k = 1, 2, 3, that is, for GF(2), GF(4), and GF(8), do you get quadratic residue codes that correspond to Galois sequences, and they give you the complex numbers, quaternions, and octonions.
  The construction does not extend to sedenions because, although you have a Galois sequence for GF(16), to have a corresponding quadratic residue code it must be true that 2^4 - 1 = 16 - 1 is a prime number, and 15 is not prime, unlike 2-1 = 1, 4-1 = 3, and 8-1 = 7. If you go to 2^5 = 32, then you find that 32 - 1 = 31 is prime, so you have a quadratic residue code, but the corresponding Galois sequence multiplication does not close, so you do not get a nice closed algebra.
  The quadratic residue code of length 1 over GF(2) is [ 1 ]; it is also a Galois sequence for GF(2), and it gives the complex numbers. The quadratic residue code of length 3 over GF(2) is [ 1 1 0 ]; it is also a Galois sequence for GF(4), and it gives the quaternions. The quadratic residue code of length 7 over GF(2) is [ 1 1 1 0 1 0 0 ]; it is also a Galois sequence for GF(8), and it gives the octonions.
  The Galois sequence is the sequence of coefficients in Z2 of a polynomial in GF(2^k).
For instance, the given Galois sequence for GF(8) gives the polynomial over GF(8)

1 + 1x + 1x^2 + 0x^3 + 1x^4 + 0x^5 + 0x^6

(since for all x =/= 0 in GF(8), we have x^7 = 1).

To see how this works with octonions, use the Galois sequence for GF(8) and its cyclic shifts to define:

        i j E k J K I
  i  =  1 1 1 0 1 0 0
  j  =  0 1 1 1 0 1 0
  E  =  0 0 1 1 1 0 1
  k  =  1 0 0 1 1 1 0
  J  =  0 1 0 0 1 1 1
  K  =  1 0 1 0 0 1 1
  I  =  1 1 0 1 0 0 1

This 7x7 matrix can be extended to an 8x8 matrix M:

  0 0 0 0 0 0 0 0  =  o0
  0 1 1 1 0 1 0 0  =  o1
  0 0 1 1 1 0 1 0  =  o2
  0 0 0 1 1 1 0 1  =  o3
  0 1 0 0 1 1 1 0  =  o4
  0 0 1 0 0 1 1 1  =  o5
  0 1 0 1 0 0 1 1  =  o6
  0 1 1 0 1 0 0 1  =  o7

Then the octonion product is oj ok = (-1)^(Mjk) (oj + ok), where + is Z2 addition.

If you replace 0 by +1 and 1 by -1, you get Hadamard matrices that are used to construct octonions.
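A minimal sketch, assuming only the cyclic-shift structure shown above: build M from the shifts of the Galois sequence, apply the 0 → +1, 1 → -1 substitution, and confirm the Hadamard property (all rows mutually orthogonal):

```python
import numpy as np

seq = [1, 1, 1, 0, 1, 0, 0]                       # Galois sequence for GF(8)
shifts = [seq[-s:] + seq[:-s] for s in range(7)]  # rows i, j, E, k, J, K, I

M = np.zeros((8, 8), dtype=int)                   # row and column 0 stay 0 (o0)
for r, row in enumerate(shifts):
    M[r + 1, 1:] = row

H = (-1) ** M                                     # replace 0 -> +1, 1 -> -1
# Hadamard property: H H^T = 8 I, i.e. distinct rows agree in exactly 4 places
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))
```

The orthogonality works because the XOR of any two distinct cyclic shifts of the sequence is again a cyclic shift (the shift-and-add property), so any two rows of M differ in exactly four positions.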
  Calderbank, Rains, Shor, and Sloane have found a unique [[ 8, 3, 3 ]] quantum code that is an (8,2^5) additive code whose automorphism group is of order 168, and is the semidirect product of the cyclic group of order 3 and the general affine group over GF(8).  
  A common fundamental structure causes quantum-error-correcting codes to be based on GF(4), the hexacode H6 to be related to the Golay codes and Leech lattice, and an RNA code to be based on 4 nucleotides UGAC, taken 3 at a time.  
  Cerf and Adami have shown that the information theory of quantum computers can give negative conditional entropies for quantum entangled systems. Therefore negative virtual information can be carried by particles, and quantum information processes can be described by particle-antiparticle diagrams much like particle physics diagrams.

How does the D4-D5-E6 model look in terms of Quantum Information?

First, look at the Clifford Algebra structure of the D4-D5-E6-E7-E8 VoDou Physics model. Then, look at the paper of Calderbank, Rains, Shor, and Sloane, where they say, given the quantum state space C^2^n of n qubits, "... The known quantum codes seemed to have close connections to a finite group of unitary transformations of C^2^n, known as a Clifford group, ... [containing] all the transformations necessary for encoding and decoding quantum codes. It is also the group generated by fault-tolerant bitwise operations performed on qubits that are encoded by certain quantum codes. ..."

Now, look at the example of Steane of the Quantum Reed-Muller code [[ 256, 0, 24 ]], which maps a quantum state space of 256 qubits into 256 qubits, correcting [(24-1)/2] = 11 errors and detecting 24/2 = 12 errors. Let C(n,t) = n! / ( t! (n-t)! ).
Then [[ 256, 0, 24 ]] is of the form

[[ 2^n, 2^n - C(n,t) - 2 SUM(k=0 to t-1) C(n,k), 2^t + 2^(t-1) ]]
[[ 2^8, 2^8 - C(8,4) - 2 SUM(k=0 to 3) C(8,k), 2^4 + 2^(4-1) ]]
[[ 2^8, 2^8 - 70 - (1+8+28+56) - (1+8+28+56), 16 + 8 ]]
[[ 256, 256 - (1+8+28+56+70+56+28+8+1), 16 + 8 ]]
[[ 256, 16x16 - SUM(k=0 to 8) 8/\8/\..(k)../\8, 16 + 8 ]]

The quantum code [[ 256, 0, 24 ]] can be constructed from the classical Reed-Muller code (256, 93, 32) of the form

( 2^8, 2^8 - SUM(k=0 to t) C(n,k), 2^(t+1) )
( 2^8, 2^8 - SUM(k=0 to 4) C(8,k), 2^5 )
( 2^8, 2^8 - (70+56+28+8+1), 32 )
( 2^8, 1+8+28+56, 32 )

To construct the quantum code [[ 256, 0, 24 ]]:

First, form a quantum code generator matrix from the 93x256 generator matrix G of the classical code (256, 93, 32):

| G | 0 |
| 0 | G |

Second, form the generator matrix of a quantum code of distance 16 by adding to the quantum generator matrix a matrix Dx such that G and Dx together generate the classical Reed-Muller code (256, 163, 16) = ( 2^8, 1+8+28+56+70, 16 ):

| G  | 0 |
| 0  | G |
| Dx | 0 |

This quantum code has been made by combining the classical codes (256, 93, 32) and (256, 163, 16), so it is of the form [[ 256, 93 + 163 - 256, min(32,16) ]] = [[ 256, 0, 16 ]]. It is close to what we want, but has distance 16. For the third and final step, increase the distance to 16 + 8 = 24 by adding Dz to the quantum generator matrix:

| G  | 0  |
| 0  | G  |
| Dx | Dz |

This is the generator matrix of the quantum code [[ 256, 0, 24 ]] as constructed by Steane.

The two classical Reed-Muller codes used to build [[ 256, 0, 24 ]] are (256, 163, 16) and (256, 93, 32), classical Reed-Muller codes of orders 4 and 3, which are dual to each other.
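The parameter arithmetic above can be checked mechanically; a minimal sketch:

```python
from math import comb

n, t = 8, 4
# Steane's family: [[2^n, 2^n - C(n,t) - 2*SUM_{k<t} C(n,k), 2^t + 2^(t-1)]]
k_q = 2 ** n - comb(n, t) - 2 * sum(comb(n, k) for k in range(t))
d_q = 2 ** t + 2 ** (t - 1)
assert (2 ** n, k_q, d_q) == (256, 0, 24)

# CSS-style combination of (256, 93, 32) and (256, 163, 16)
assert 93 + 163 - 256 == 0

# a distance-24 code corrects floor((24-1)/2) = 11 and detects 24/2 = 12 errors
assert (24 - 1) // 2 == 11 and 24 // 2 == 12
```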
Due to the nested structure of Reed-Muller codes, they contain the Reed-Muller codes of orders 2, 1, and 0:

  Classical Reed-Muller Codes of Length 2^8 = 256      Order
  ( 256, 1+8+28+56+70+56+28+8+1, 1 )                     8
  ( 256, 1+8+28+56+70+56+28+8, 2 )                       7
  ( 256, 1+8+28+56+70+56+28, 4 )                         6
  ( 256, 1+8+28+56+70+56, 8 )                            5
  ( 256, 1+8+28+56+70, 16 )                              4
  ( 256, 1+8+28+56, 32 )                                 3
  ( 256, 1+8+28, 64 )                                    2
  ( 256, 1+8, 128 )                                      1
  ( 256, 1, 256 )                                        0

In the Lagrangian of the D4-D5-E6 physics model:

the Higgs scalar prior to dimensional reduction corresponds to the 0th order classical Reed-Muller code (256, 1, 256), which is the classical repetition code;

the 8-dimensional vector spacetime prior to dimensional reduction corresponds to the non-0th-order part of the 1st order classical Reed-Muller code (256, 9, 128), which is dual to the 6th order classical Reed-Muller code (256, 247, 4), the extended Hamming code, extended from the binary Hamming code (255, 247, 3), which in turn is dual to the simplex code (255, 8, 128);

the 28-dimensional bivector adjoint gauge boson space prior to dimensional reduction corresponds to the non-1st-order part of the 2nd order classical Reed-Muller code (256, 37, 64).

HERE is a D4-D5-E6 model physical interpretation of higher order classical Reed-Muller codes, written in terms of the graded subspaces of the Clifford algebra Cl(0,8).

The 8 first-generation fermion particles and 8 first-generation fermion antiparticles of the 16-dimensional full spinor representation of the 256-dimensional Cl(0,8) Clifford algebra correspond to the distance of the classical Reed-Muller code (256, 163, 16), as well as to the square root of 256 = 16x16, and to the 16-dimensional Barnes-Wall lattice /\16, which comes from the (16, 5, 8) Reed-Muller code. Each /\16 vertex has 4320 nearest neighbors.
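The nesting table above can be regenerated from the binomial sums, since the order-r Reed-Muller code of length 2^m has dimension SUM(i=0 to r) C(m,i) and distance 2^(m-r); a minimal check:

```python
from math import comb

# dimension and distance of the order-r Reed-Muller code of length 2^8
dims = [sum(comb(8, i) for i in range(r + 1)) for r in range(9)]
dists = [2 ** (8 - r) for r in range(9)]

assert dims == [1, 9, 37, 93, 163, 219, 247, 255, 256]
assert dists == [256, 128, 64, 32, 16, 8, 4, 2, 1]
```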
The other 8 of the 16+8 = 24 distance of the quantum Reed-Muller code [[ 256, 0, 24 ]] corresponds to the 8-dimensional vector spacetime, and to the 8-dimensional E8 lattice, which comes from the (8,4,4) extended Hamming code, with weight distribution 0(1) 4(14) 8(1). It can also be constructed from the repetition code (8,1,8). The dual of (8,1,8) is (8,7,2), the zero-sum even-weight code containing all binary vectors with an even number of 1s. Each E8 lattice vertex has 240 nearest neighbors. In Euclidean R8, there is only one way to arrange 240 spheres so that they all touch one sphere, only one way to arrange 56 spheres so that they all touch a set of two spheres in contact with each other, and so forth, giving the following classical spherical codes: (8,240,1/2), (7,56,1/3), (6,27,1/4), (5,16,1/5), (4,10,1/6), and (3,6,1/7).

The total 24 distance of the quantum Reed-Muller code [[ 256, 0, 24 ]] corresponds to the 24-dimensional Leech lattice, and to the classical extended Golay code (24, 12, 8); in the Leech lattice each vertex has 196,560 nearest neighbors. In Euclidean R24, there is only one way to arrange 196,560 spheres so that they all touch one sphere, only one way to arrange 4600 spheres so that they all touch a set of two spheres in contact with each other, and so forth, giving the following classical spherical codes: (24,196560,1/2), (23,4600,1/3), (22,891,1/4), (21,336,1/5), (20,170,1/6), ... .
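The 240 nearest neighbors of an E8 lattice vertex can be counted directly by enumerating the minimal vectors in the standard coordinates: 112 integer-coordinate roots plus 128 half-integer ones. A minimal sketch:

```python
from itertools import combinations, product

# integer roots: two coordinates +-1, the rest 0 (squared length 2)
int_roots = set()
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        int_roots.add(tuple(v))

# half-integer roots: every coordinate +-1/2, an even number of minus signs
half_roots = {v for v in product((0.5, -0.5), repeat=8)
              if sum(c < 0 for c in v) % 2 == 0}

assert len(int_roots) == 112
assert len(half_roots) == 128
assert len(int_roots) + len(half_roots) == 240   # E8 kissing number
```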

According to quant-ph/0301040 by Dorit Aharonov, entitled A Simple Proof that Toffoli and Hadamard are Quantum Universal:
"... Recently Shi [15] proved that Toffoli and Hadamard are universal for quantum computation. This is perhaps the simplest universal set of gates that one can hope for, conceptually; It shows that one only needs to add the Hadamard gate to make a 'classical' set of gates quantum universal. ... The fact that {T,H} is universal has philosophical interpretations. The Toffoli gate T can perform exactly all classical reversible computation. The result says that Hadamard is all that one needs to add to classical computations in order to achieve the full quantum computation power; It perhaps explains the important role that the Hadamard gate plays in quantum algorithms, and can be interpreted as saying that Fourier transform is really all there is to quantum computation on top of classical, since the Hadamard gate is the Fourier transform over the group Z2. From a conceptual point of view, this is perhaps the simplest and most natural universal set of gates that one can hope for. ... the set {T,H} is [not only] ... the set {T,H} ... generates a dense subgroup in the group of orthogonal matrices [ orthogonal matrices are related to the Dn and Bn Lie algebras and to Clifford algebras ], see ... [15] Y. Shi, Both Toffoli and controlled-Not need little help to do universal quantum computation, quant-ph/0205115 ...".


Quantum Information Theory may be related to the Urim v'Tumim.

Cerf and Adami have shown that information theory of quantum computers can give negative conditional entropies for quantum entangled systems. Therefore negative virtual information can be carried by particles, and quantum information processes can be described by particle-antiparticle diagrams much like particle physics diagrams.

Consequently, the underlying structure of Many-Worlds abstract life forms should be fundamentally similar to that of Massless Lightcone and Massive Spacelike life forms.

The picture of Cerf and Adami is somewhat similar to the Quantum Cellular Automata picture of Meyer.


  UNIVERSAL TRANSFORMATIONS are interpreted by the I CHING.     The 4x4x4 = 64 genetic code, the 2x2x2x2x2x2 = 64 I Ching, and the 8x8 = 64 D4-D5-E6 physics model are all just different representations of the same fundamental structure.   This fundamental structure is not only shared by Golay codes and Leech lattice but also by Penrose tilings and musical sequences.  
  An abstract form of life, Artificial Life (A-Life), can be constructed on a completely abstract substrate. Simple forms of A-Life can be represented on a digital computer. This form of life is represented by the ManyWorlds FEYNMAN CHECKERBOARD discrete lattice version of the D4-D5-E6 model. Lattices, such as the Leech lattice, are related to tilings. Tilings can simulate Turing machines, and therefore process information. As Chris Hillman of the University of Washington math department noted, tilings can also describe Tiling Dynamical Systems, analogous to Shift Dynamical Systems.

Quantum Game Theory

may be used by Quantum Consciousness to determine Fates.

According to Physics News Update (Number 411) of the AIP, "... Star Trek's Captain Picard and Q ... engage in a hypothetical contest that represents the extension of game theory to the quantum world. With these characters, physicist David Meyer of the math department (Project in Geometry and Physics) at UC-San Diego ... illustrates how playing nanoscopic versions of familiar games with atoms (or any other object which obeys the peculiar rules of quantum mechanics) may reveal new information-processing tasks (beyond already known ones) that quantum computers would perform more efficiently than classical computers. In Meyer's scenario, Q promises Picard that he will help get the Enterprise out of its latest emergency if Picard wins a game. Specifically, the contest amounts to a quantum version of a penny-flipping game, in which an atomic nucleus with "spin-up" and "spin-down" states takes the place of the familiar zinc coin with heads and tails. Through this game, Meyer shows that players like Q who exploit the unique properties of quantum-mechanical objects (such as the ability to put it in a simultaneous combination or "superposition" of two states) enjoy a distinct advantage over those who (like Picard) just treat the objects like everyday items such as balls or coins (which can only be in one state or the other). Through his use of superpositions, Q manipulates the nucleus in such a way that ensures he always wins, even though the chances of winning the classical version of the game are only 50-50. Such a contest, Meyer points out, can be easily demonstrated with the rudimentary quantum computers that now exist, and may provide insights on such things as quantum-error correction. ...".

David Meyer's paper quant-ph/9804010 gives details. Its abstract states: "We consider game theory from the perspective of quantum algorithms. Strategies in classical game theory are either pure (deterministic) or mixed (probabilistic). We introduce these basic ideas in the context of a simple example, closely related to the traditional Matching Pennies game. While not every two-person zero-sum finite game has an equilibrium in the set of pure strategies, von Neumann showed that there is always an equilibrium at which each player follows a mixed strategy. A mixed strategy deviating from the equilibrium strategy cannot increase a player's expected payoff. We show, however, that in our example a player who implements a quantum strategy can increase his expected payoff, and explain the relation to efficient quantum algorithms. We prove that in general a quantum strategy is always at least as good as a classical one, and furthermore that when both players use quantum strategies there need not be any equilibrium, but if both are allowed mixed quantum strategies there must be.". The paper quant-ph/9804010 by David Meyer is, as far as I know, the seminal paper in the field. Further developments include:
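The penny-flip advantage described above can be simulated in a few lines. A minimal sketch: Q applies a Hadamard, Picard either flips the spin (X) or leaves it alone (I), and Q applies a Hadamard again; since the equal superposition is invariant under X, Q always recovers spin-up and wins:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])                 # classical flip
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # Q's quantum move

def q_wins(picard_move):
    """Play one round: Q plays H, Picard plays his (classical) move,
    Q plays H again. Returns True if the nucleus ends spin-up (Q wins)."""
    up = np.array([1.0, 0.0])                      # |spin-up>
    state = H @ (picard_move @ (H @ up))
    return np.allclose(np.abs(state), up)

# Q wins whichever classical choice Picard makes
assert q_wins(I2) and q_wins(X)
```

The point of the simulation is that H|up> = (|up> + |down>)/sqrt(2) is a fixed point of X, so Picard's move carries no information and Q's second Hadamard deterministically restores |up>.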

 I believe that quantum game theory (not classical game theory) is needed for consciousness modelling, and that is what my quantum consciousness model does, using:

Clifford Algebra techniques,

Quantum Zeno Effect techniques,

and Quantum Error-Correcting Codes.



Tony Smith's Home Page