The math showing how the Higgs forms when 8-dim spacetime is reduced to 4-dim spacetime can be found on my web site at

http://www.valdostamuseum.org/hamsmith/2002SESAPS.html#hmvev

The math proof was not done by me, but by Meinhard Mayer, who used Proposition 11.4 of Chapter II of Foundations of Differential Geometry, vol. 1 (John Wiley, 1963), by Kobayashi and Nomizu.

My effort was to give Mayer's math proof a physical interpretation as part of my physics model. I think that Mayer is now a professor emeritus of physics and math at U.C. Irvine.

The geometric volumes related to the physics groups that I use, along with some combinatoric rules, produce the force strengths and constituent masses that are seen in physics. My two most recent papers about that can be found at

http://www.valdostamuseum.org/hamsmith/July2006Update.html#octuninflation

and

http://www.valdostamuseum.org/hamsmith/July2006Update.html#factorsphysics

As long as my physical representations of math objects are used consistently with the structure of the math objects, that is all that is needed for physics model building. Conventional physicists often do the same thing. For example, AFAIK nobody has given any "math proof" justifying the use of path integral quantization of the Standard Model Lagrangian, but conventional physicists use it all the time.

However, there is one part of my model where I am using a math conjecture that I have not proven, which is that a lot of real 8-dim Clifford algebras, when put together, form a generalization of the Hyperfinite II1 von Neumann factor. See

http://www.valdostamuseum.org/hamsmith/2002SESAPS.html#cl1780vnHFII1

I would like to prove that conjecture, or see somebody else prove it, but I have not yet had the time to do it.

As to whether there are "... any new prediction behind your theory ...", the answer is yes. Recently I have put two papers on my web site at

http://www.valdostamuseum.org/hamsmith/CDFSingleT.pdf

and

http://www.valdostamuseum.org/hamsmith/TriVacstab.pdf

They describe my model of the Tquark - Higgs - Triviality system, and they predict that the LHC will see three states of the Higgs boson:

- at about 146 GeV
- at about 180 GeV
- at about 239 GeV.

Since the LHC should be able to see any Higgs state below about 250 GeV soon (within a couple of years or so) after it begins to get results in 2008, my masses for the three Higgs states are predictions that should be tested then. There have been some very inconclusive results from Fermilab that encourage me, which I have discussed at

http://www.valdostamuseum.org/hamsmith/ZbbAfb.html

In my model you have (in the fundamental first generation) 8 fermion particles and 8 fermion antiparticles. In the binomial expansion used in my 8-dim Clifford algebra you have

1 8 28 56 70 56 28 8 1

- the 8 1-vectors correspond to the 8 +half-spinor fermion particles and also to the 8 -half-spinor fermion antiparticles by Triality
- the 28 gauge boson Spin(8) generators are bivectors which can be represented as fermion-antifermion pairs.

So, maybe only in the special dimensions of my model (8 reduced to 4 of spacetime) can you formulate bosons as fermion-antifermion pairs.
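The row quoted above is just the binomial coefficients C(8,k), whose sum is 2^8 = 256. A quick Python check (my illustration, not part of the model's derivation):

```python
from math import comb

# Grade dimensions of the 256-element Clifford algebra Cl(8):
# grade k has C(8,k) elements, giving the binomial row quoted above.
grades = [comb(8, k) for k in range(9)]
print(grades)       # [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(sum(grades))  # 256

# grade 1: the 8 vectors (first-generation fermion particles, and by
# Triality also the 8 antiparticles); grade 2: the 28 Spin(8) bivectors.
```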

As to how "... 8 of the [28] 2-grade bivectors [of the 8-dim real Clifford algebra] AFTER DIMENSIONAL REDUCTION TO 4-D PHYSICAL SPACETIME corrspond to the 8 generators of color for SU(3) ? ...", the process is set out in some detail on my web site at

http://www.valdostamuseum.org/hamsmith/Sets2Quarks4a.html#WEYLdimredGB

and from some other equivalent points of view at

http://www.valdostamuseum.org/hamsmith/KQphys.html

http://www.valdostamuseum.org/hamsmith/KQphys.html#GLdimred

Further, this text from e-mail messages I sent to Garrett Lisi back in September 2005 might be useful:

"... There are several levels at which you can look at the Spin(8) group associated with Cl(8). (Here I am using Euclidean signature being sloppy for simplicity.) ... The Lie algebra of Spin(8) is generated by the Weyl symmetries of the root vector polytope of Spin(8), which is the 24-cell in 4-dim Euclidean space. So, you can describe Spin(8) in three ways: As a Lie group. As its Lie algebra. As its root vector polytope. What I agree cannot be done is to factor the Lie group Spin(8) into the Cartesian product of Lie groups U(4) x SU(3) x SU(2) x U(1). ... However, what I contend CAN be done, is to do the decomposition at the level of the root vector polytope, ... the way I decompose the 24-cell plus 4 Cartan dimensions for 28-dim Spin(8) into 12-vertex cuboctahedron plus 4 Cartan dimensions for 16-dim U(4) and 8-vertex cube for SU(3) and line with 4 vertices for U(2) = SU(2) x U(1) is an unambiguous procedure ... However, it requires thinking of the Gauge Groups not as Lie groups, and not even as Lie algebras generating them, but as the root vector polytopes that generate the Lie algebras. Such a way of thinking is clearly defined mathematically, as in texts that describe how to construct Lie algebras from the root vector polytopes. For example, see Chapter 21 (especially section 21.3) of the book Representation Theory by Fulton and Harris (Springer 1991) which does that using the Dynkin diagrams that are associated with the root vector polytopes and therefore the Lie algebras. ... However, such a way of thinking is not common for mathematicians, much less for physicists. Even though it is not a common way of thinking, it does work, in the sense of producing a realistic physics model, and it works much better than the more well-known way of avoiding the Coleman-Mandula theorem by using Lie superalgebras instead of Lie algebras. ... In other words, my work began in the early 1980s, and was motivated by supergravity. 
I wanted to make a better and more radical departure from Lie group/algebra than the supergravity departure to Lie superalgebra. I read a paper by Saul-Paul Sirag about physics and the Weyl group, and I realized that my departure could be going down to the Weyl group / root vector level, so I began to work on it, and (even though my then-advisor David Finkelstein was unenthusiastic about it) my work on the Weyl group / root vector stuff led me to my present realistic model. ... AFTER you use the root vector process for the decomposition, THEN AND ONLY THEN do you construct Lie algebra/group structures from the decomposed root vector polytopes, and then you have the conventional MacDowell-Mansouri gravity from the U(2,2) and the Standard Model from the SU(3)xSU(2)xU(1), each in their own sandbox (in the computer system sense of the term "sandbox") ...".

As to

http://www.valdostamuseum.org/hamsmith/clif256rules.html

and whether "... there ... Is ... a way to associate to each one of the 256 CA a matrix and stablish a kind isomorfism between Cl(8) and the 1-D CA ? ...".

Effectively, Wolfram does that in his book by assigning to each such automaton a unique number between 0 and 255, which can be written in 8-digit binary form. The resulting 256 binary numbers can be assigned to grades in the 256-element Clifford algebra according to how many 1s they contain. If the number of 0s had been used instead, the result would be the Hodge dual of Cl(8), which is also isomorphic to Cl(8).
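The rule-number-to-grade assignment just described can be sketched in a few lines of Python (the popcount convention here is my reading of the description above):

```python
from collections import Counter

# Assign each elementary CA rule number 0..255 to a Cl(8) grade by the
# number of 1s in its 8-digit binary form (popcount).
grade = {rule: format(rule, "08b").count("1") for rule in range(256)}

# The number of rules at each grade reproduces the binomial row of Cl(8).
counts = Counter(grade.values())
print([counts[k] for k in range(9)])  # [1, 8, 28, 56, 70, 56, 28, 8, 1]

# Counting 0s instead sends grade k to grade 8-k: the Hodge dual grading.
assert all(format(r, "08b").count("0") == 8 - grade[r] for r in range(256))
```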

As to whether I think that all "... the ... CA ... giving ...[the same]... [graphic] results ... [should be seen as equivalent because they] ... give the same information ...",

No, because even if they give the same graphic results, their rules are really fundamentally different, which is why they are assigned different 8-digit binary numbers. However, having the same graph does indicate that their physical properties may be similar. For example, 11000000 corresponds to the U(1) photon and 10100000 corresponds to the SU(2) neutral weak boson, and they both have the same graphic picture of just one point and then blank, which for convenience I will call a "blank" graph. The SU(2) neutral weak boson acts very much like a photon, except that at the low energies where we do experiments the Higgs mechanism gives it mass, so it acts physically somewhat like a "heavy photon".
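The "blank" graph claim is easy to check by simulation, using Wolfram's standard elementary-CA rule numbering (the helper functions `ca_step` and `run` are my own sketch):

```python
def ca_step(cells, rule):
    """One step of an elementary (radius-1, binary) cellular automaton.
    The new value of each cell is the bit of `rule` indexed by the 3-cell
    neighborhood read as a binary number (Wolfram's convention)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=9, steps=3):
    row = [0] * width
    row[width // 2] = 1          # single "on" cell as the initial condition
    history = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        history.append(row)
    return history

# 11000000 (rule 192) and 10100000 (rule 160): both kill a lone cell
# immediately, giving the "one point and then blank" graph described above.
for rule in (0b11000000, 0b10100000):
    hist = run(rule)
    print(rule, all(c == 0 for row in hist[1:] for c in row))  # True for both
```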

As to "... why ... [I] ... consider those 8 2-grade bivectors as the generators of color force SU(3) and not others of the 2-grade bivectors ? ...", I did not do a nice math proof (I think that one could be done, but I have not taken the time to do it yet), but just intuitively looked at the pictures. The breakdown of the 28 that I wanted was:

- 12 "active" ones for the 12 root vector vertices of the cuboctahedron, which is the root vector polytope of the 15-dim SU(2,2) that I use for gravity
- plus 4 "blank" ones: 3 for its Cartan subalgebra elements plus 1 for the propagator phase

and the other 28 - 12 - 4 = 12 should correspond to

- 1 photon which should be "blank" because its gauge group U(1) is abelian
- 3 weak bosons, one of which should be "blank" because it is the Cartan subalgebra element of SU(2), and two of which should be not quite blank, but opposite in action (charge) to each other, and
- the remaining 8 should correspond to the 8-dimensional SU(3), with 6 of them corresponding in a symmetric way to vertices of a hexagon, which is the root vector polygon of SU(3), and the final 2 being the Cartan subalgebra elements of SU(3).

(You can see that my choices fit those criteria.)
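The bookkeeping of that breakdown can be tallied in a few lines (the sector labels are my shorthand for the list above):

```python
from math import comb

# The 28 bivectors of Cl(8) (= C(8,2)), split as described above:
sectors = {
    "gravity root vectors (cuboctahedron)": 12,
    "gravity Cartan + propagator phase (blank)": 4,
    "U(1) photon (blank)": 1,
    "SU(2) weak bosons (1 blank Cartan + 2 charged)": 3,
    "SU(3) gluons (6 hexagon vertices + 2 Cartan)": 8,
}
assert sum(sectors.values()) == comb(8, 2) == 28
print(sum(sectors.values()))  # 28
```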

Note that the 8 graphs for SU(3) look different from the other 16+1+3 = 20 in that they have more "volume", which I think is related to the fact that in my physics model the SU(3) acts globally on the internal symmetry space while the SU(2) and U(1) act internally on the internal symmetry space and the U(2,2) of gravity acts internally on the physical space.

Segrob Siul sent a nice list of problems, which I see as:

- 1 - the dimensional reduction of the original 8-dim spacetime to the observable 4-dim spacetime
- 2 - the "non-demonstration" that the generalized Hyperfinite II1 von Neumann factor can be built from Cl(8)
- 3 - why quantum mechanics and relativistic invariance are the rules of nature
- 4 - where the laws of physics come from.

Here are some comments on them:

- 1 - I do the dimensional reduction by considering the 8-dim spacetime to be octonionic, and doing the reduction by introducing (freezing out at lower-than-Planck energies) a preferred 4-dim quaternionic subspace, but as you point out I have not yet explained in detail how the freezing out selects a preferred quaternionic subspace.
- 2 - As we have discussed, I have not yet proven that Cl(8) is the basis for the generalized Hyperfinite II1 von Neumann factor.
- 3 - I have merely assumed that quantum mechanics comes from a sum-over-histories path integral summation of all possibilities, as used by Feynman, and that this is equivalent to a string theory in which the strings are physically interpreted as world-lines of point particles, giving a Bohm-type potential that is useful in the Penrose picture of quantum consciousness. There is a LOT of work not yet done, but so far I have been encouraged by correct numerical results related to quantum consciousness. I should note that Jack Sarfatti is working on a similar model of quantum consciousness, fermion condensates, the Pioneer anomaly, Dark Energy, etc., and his ideas have been very useful to me.
- 4 - The fundamental 8-dim Lagrangian, which defines the laws of physics, is put together from parts of the Cl(8) Clifford algebra. As to why I put each part of the Clifford algebra in each part of the Lagrangian, I just put them where it seemed to me to be natural:
- vectors to spacetime
- bivectors to gauge bosons
- spinors to fermions

but I do not have any more formal "proof" of why that assignment should be chosen (other than the fact that it works in the sense that it produces realistic particle masses, force strengths, etc).

As to "... where to place in time the initial condition ... How [my] model reconciles with the Big Bang theory ? ...", as Segrob Siul says, in my model "... the Universe starts from the voids by introducing a binary choice ...". Roughly, what happens is that before any space or time or gauge bosons or fermions exist:

- 1 - there is a void
- 2 - a binary choice emerges (0 and 1, or -1 and +1, or yin and yang, etc.). As to how this emerges, there are a lot of religio-philosophical ideas/possibilities; I won't try to go into them in detail, but will use one example for illustration: the Chinese I Ching.
- 3 - branching of binary choices leads to the 8 trigrams, which "bind" together to give the Clifford algebra Cl(3), which is 8-dimensional and is closely related to, but not identical with, SU(3) and the octonions. (Note that a standard way to build Clifford algebras is from binary choice, or, equivalently, from set theory - see Barry Simon's book "Representations of Finite and Compact Groups" (AMS 1996).) (Note that here "bind" means that they interact with each other in a mathematically consistent way to form a mathematical object - the Clifford algebra. As to why a Clifford algebra, see the PS below.)
- 4 - further branching leads to the 64 hexagrams of the I Ching which "bind" together to give the Clifford algebra Cl(6)
- 5 - another branching leads to the 128 elements of Shinto futomani divination and to Cl(7)
- 6 - another branching leads to the 256 odu of IFA and to Cl(8)
- 7 - further branching leads to a huge Clifford algebra Cl(N) for all N no matter how large you choose N to be.
- 8 - Here is where my further hyperfinite factor work is needed. Take the union of ALL the Cl(N), and then complete it as an algebra, which I will call El Aleph (from Luis Borges's writings). I think (but as we have discussed have not proven) that such an algebra exists, and here is why: Any Cl(N) for any large N can be factored by 8-periodicity into a tensor product of N/8 copies of Cl(8) (if N is not divisible by 8, just go to the next higher N that is divisible). So, El Aleph can be regarded as the completion of the union of all the tensor products of finite numbers of the real Clifford algebra Cl(8). (This is the real analogue of the construction of the usual hyperfinite II1 factor as the completion of the union of all tensor products of the 2x2 Complex matrix algebra that is the complex Clifford algebra Cl(2;C), which works because complex Clifford algebras have periodicity 2. Since I am doing pretty much the same thing with the real Clifford algebra Cl(8) and real Clifford algebras have periodicity 8, I am pretty sure that the details will fall into place.) Anyhow, assuming that El Aleph can be so constructed, in it everything is connected to everything else, much like El Aleph of Luis Borges.
- 9 - At this point, space, time, etc can be seen to emerge by looking at different parts of each of its component Cl(8) Clifford algebras and requiring that all the Cl(8) components fit together consistently. For instance, look at each Cl(8) as the physics structure of a point, and let the 8-dim vector part look like a tangent space to a spacetime (of 1 time dimension and 7 space dimensions) containing that point, and then require that all the 8-dim tangent spaces connect up to each other in a consistent way, such as folded up as shown on my page at http://www.valdostamuseum.org/hamsmith/ClifTensorGeom.html (It would be fun to do an animation of the folding-up, but to do it right would require showing it in a 7-dim space with the animation evolving in a 1-dim time. Showing a 3-dim space is easy, by using stereo pairs of 2-dim pictures, but encoding the other 4 dimensions is what has stopped me so far. Maybe color coding could be used, or maybe little independent 4-dim pictures at each point (sort of like internal symmetry spaces), but it does not seem to me to be easy to do.) The bivectors, spinors, etc would also be required to fit together, and to interact with the space-time to form an 8-dim Lagrangian, from which you get in a very standard way energy, matter, forces, etc.
- 10 - Now we have a huge El Aleph and we can see how it contains all possible states of an 8-dim spacetime with the particles and forces that we know and love. Look at one possible state in which a quantum fluctuation has just emerged at one particular point. That looks like the beginning of a universe such as ours. HERE IS WHERE OUR "TIME" BEGINS. The universe then begins to grow (since octonions are non-unitary - see http://www.valdostamuseum.org/hamsmith/Jun2006Update.html#octuninflation ) in an inflationary phase until it gets too big to maintain its coherence, as described by Paola Zizzi - see http://www.valdostamuseum.org/hamsmith/cosm.html#instexp
- 11 - At the end of inflation, our universe is no longer able to maintain a full 8-dim octonionic spacetime, but falls into a quaternionic structure of 4-dim physical spacetime plus 4-dim internal symmetry space.
- 12 - Our universe continues to evolve (4-dim spacetime) to the present time, in which we have Dark Energy, Dark Matter, and Ordinary Matter in the ratio seen by WMAP etc - see http://www.valdostamuseum.org/hamsmith/AugSuppUpdate.html (Note that my model allows calculation of that ratio, something that AFAIK no other model can come close to doing.)

The 12-step program above built our universe out of just one point out of the huge El Aleph, so that El Aleph is really big and really comprehensive, like the Vedic unified Krishna.
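The 8-periodicity bookkeeping used in step 8 above can at least be checked at the level of dimensions (this is only a dimension count, not the conjectured completion itself):

```python
# Real Clifford algebras have 8-periodicity: Cl(8k) factors as a tensor
# product of k copies of Cl(8). A quick dimension check: dim Cl(N) = 2^N,
# and k tensored copies of the 256-element Cl(8) give 256^k elements,
# so the dimensions agree exactly when N = 8k.
for k in range(1, 6):
    N = 8 * k
    assert 2 ** N == 256 ** k
    print(f"Cl({N}): 2^{N} = 256^{k} = {2 ** N}")
```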

PS - As to why I use Clifford algebra, it is because other math structures seem too limited. For instance: Lie algebras are included in the Clifford algebra bivectors, and so are not as comprehensive; Exterior (Grassmann) algebras don't have spinors; etc. Maybe that is not a formal justification, and I should just say that after trial and error, the Clifford algebras work in the sense that the physics model constructed with them gives the "right answers" when compared with experimental results (which I regard as the voice of g-d).

PPS - You might also ask why I use Lagrangian structure, and I can only say that to me Lagrangians seem natural (there are physicists who disagree and don't like Lagrangians, but here, for once, I am on the side of the conventional physicists), and Lagrangians are a very effective way to describe the physics that we see in experiments.

As to using the isomorphism between CAs and Cl(8) to construct an "... operation between CAs so that computer experiments can be performed and reproduce the results from ...[my]... model (locally, because the isomorphism is with Cl(8) ) ...", that would be nice, and it need not be only local because tensor products of Cl(8) make up larger-than-local neighborhoods and tensor products of the CA operation should be definable and workable.

On a large scale, what I would like to see would be to use such computer experiments to describe basins of attraction etc to see how our future (or possible futures) might look, by considering our futures to be described by quantum game theory played out among the possibilities of Many-Worlds quantum theory. For a very much oversimplified example see http://www.valdostamuseum.org/hamsmith/ManyFates.html#example

Such basins of attraction have been described for individual CAs (see for example the book "The Global Dynamics of Cellular Automata" by Wuensche and Lesser (Addison-Wesley 1992)) and it would be fun to see that for lots of CAs acting as Clifford algebra elements.

On a smaller scale, maybe such an operation would allow computer CA experiments that would make difficult calculations (such as QCD SU(3) color force calculations) easier. As to whether that approach to QCD might work, when I look at 2D successes with respect to fluid dynamics (see the book "Lattice Gas Methods for Partial Differential Equations" ed. by Gary Doolen (Addison-Wesley 1990)) I get optimistic, but when I note that Wolfram even as late as his New Kind of Science book seems to have made no substantial progress with respect to QCD calculations, I get pessimistic. However, maybe Wolfram never thought about combining CAs with Clifford algebras and using the resulting operation.

As to "... whether Clifford algebra could be applied to cellular automata in Planck scale geometry, whether e.g. each Planck volume could be considered a discrete cell in a cellular automaton. ...", that seems to me to be related to the work of Paola Zizzi on the universe as a big quantum computer. In checking the web, I saw at http://www.quantumbionet.org/eng/index.php?pagina=60 a description of her current work, and an article by her, "The Poetry of a Logical Truth", at http://www.quantumbionet.org/eng/index.php?pagina=84

As to Paola Zizzi saying "and corresponds to a superposed state of 10^9 quantum registers", which is much smaller than the number of tubulins in the human brain, she and I discussed that back in 2000 and we agreed, as she said, that "... As far as the number of tubulins is concerned ... the total number of them in our brain is 10^18 ... the selected quantum register, which is the n=10^9, contains 10^18 qubits...".

Here is why I have been using 10^18 tubulins per human brain: The human brain contains about 10^11 neurons; and there are 10^7 Tubulin Dimers per neuron. As references, see the Osaka paper QUANTUM COMPUTING IN MICROTUBULES - AN INTRA-NEURAL CORRELATE OF CONSCIOUSNESS? by Stuart Hameroff in which he mentions: "... the human brain's 10^11 neurons ..." and the Orch OR paper http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html Orchestrated Objective Reduction of Quantum Coherence in Brain Microtubules: The "Orch OR" Model for Consciousness by Stuart Hameroff and Roger Penrose in which they say: "... Yu and Baas (1994) measured about 10^7 tubulins per neuron. ...". Their Yu and Baas (1994) reference is "... Yu, W., and Baas, P.W. (1994) Changes in microtubule number and length during axon differentiation. J. Neuroscience 14(5):2818-2829....".

The Osaka paper was on the web some years ago, and I downloaded its text back then, and my quote from it is from that text download. I don't know exactly where to find it on the web nowadays. A reference for the number of neurons that is on the web now is PHYSICAL REVIEW E, VOLUME 65, 061901 (Received 2 May 2000; revised manuscript received 7 August 2001; published 10 June 2002), Quantum computation in brain microtubules: Decoherence and biological feasibility, by S. Hagan, S. R. Hameroff, and J. A. Tuszynski, at http://www.quantumconsciousness.org/pdfs/decoherence.pdf where they mention "... about 10^8 neurons (approximately 0.1%-1% of the entire brain) ...".

The Samsonovich-Hameroff et al ideas about patterns of excited tubulins in a microtubule do remind me of the cellular automata patterns that may be isomorphic to the 256 elements of the Cl(8) Clifford algebra. Where my ideas may differ from Samsonovich-Hameroff et al is in their ACT (acousto-conformational transformation) mechanism by which MTs communicate with each other. My idea is that the communication is by resonant gravity connection. It is based on Penrose's use of gravity as an Orch OR mechanism, and on generalizing to gravitons Carver Mead's description of resonance causing atomic emission of electromagnetic photons - see http://www.valdostamuseum.org/hamsmith/QuanConResonance.html#resonance

Samsonovich-Hameroff et al describe two ways for ACT to work:

- 1 - "... two neighboring coupled MTs (or two parts of the same MT) ..." and
- 2 - "... a global ACT, similar to a non-linear wave, propagating within cells along the cytoskeleton (and possibly from one cell to another) ... similar to ... nonlinear neural network models with feedback ...".

I think that 1 is too slow in propagating throughout the brain. 2 is basically the picture that I have, but I use resonant gravity as the basic underlying force connecting all the MTs, and although they do use Penrose's idea of gravity for Orch OR collapse, I think that they may not use gravity to maintain coherent superpositions of MTs throughout the brain.

As to why I think that 1 - "... two neighboring coupled MTs (or two parts of the same MT) ..." may be too slow to be a way for ACT to communicate among MTs to maintain superposition coherence throughout the brain, Samsonovich-Hameroff et al say about that mechanism: "... global coherent oscillations are initiated spontaneously by thermal fluctuations and amplified by energy release due to conformational motions stimulated by these oscillations ...". The thermal fluctuations have a time-scale, their generation of oscillations introduces another time-delay, and the conformational motions introduce yet another time-delay. All those time factors are much slower than graviton speed-of-light connections, and when added up they are slower than the ambient thermal fluctuations that can make a superposition decohere in the way used by Tegmark in his criticism of quantum consciousness. So it seems to me that mechanism 1 is doomed to failure as a means of maintaining superposition coherence.

In my picture, each tubulin emits and absorbs gravitons (at speed of light) from every other tubulin involved in the superposition, so the gravitons keep all the tubulins in step to be in coherent superposition by exchanging gravitons much faster than the time-scale of the thermal fluctuations that Tegmark tries to use for decoherence. Since the tubulins interact much faster than the thermal fluctuations, they can easily evade any decohering effects related to the thermal fluctuations.

An analogy occurred to me. Consider the USA stealth aircraft F117. Before its development, fixed-wing aircraft were generally aerodynamically stable in that, left alone, they tended to continue on a predictable path. However, the F117 was aerodynamically unstable in that, left alone, small fluctuations of turbulence would destabilize the aircraft, and no human pilot could react fast enough to correct those instabilities, so with only a human pilot it would not continue on a predictable path (and would likely crash). Due to its instabilities, it was known to its test pilots as the "Wobblin' Goblin". Only with the development of automated computer control systems that reacted much faster than the time scale of turbulent fluctuations could such an aircraft be useful to an air force. Since such reaction time was far faster than any human reaction time, some test pilots quit because there was no way they themselves could be in "control" of the aircraft. At the risk of belaboring the obvious:

- turbulent fluctuations that destabilize the F117 = thermal fluctuations that Tegmark used to argue for decoherence
- slow human reflexes (slower than the turbulent fluctuations) fail to stabilize the F117 = slow processes (not much faster than thermal fluctuations) fail to stop Tegmark-type decoherence
- fast computer control systems (much faster than the turbulent fluctuations) can and do stabilize the F117 = fast processes (carried by speed-of-light gravitons) allow maintenance of coherence of MT state superpositions

In one of his papers known as a "water paper" (I downloaded it years ago, but the link I used then seems to be invalid now) Stuart Hameroff says: "... Herbert Frohlich, an early contributor to the understanding of superconductivity, also predicted quantum coherence in living cells (based on earlier work by Oliver Penrose and Lars Onsager ... Frohlich ... theorized that sets of protein dipoles in a common electromagnetic field (e.g. proteins within a polarized membrane, subunits within an electret polymer like microtubules) undergo coherent conformational excitations if energy is supplied. Frohlich postulated that biochemical and thermal energy from the surrounding "heat bath" provides such energy. Cooperative, organized processes leading to coherent excitations emerged, according to Frohlich, because of structural coherence of hydrophobic dipoles in a common voltage gradient. ...".

That would be using electromagnetic photon processes to maintain the coherence, although they may use gravity for Orch OR decoherence. I prefer to use gravity for both things.

As to the ideas described above as Frohlich's, I see a problem with his source of energy for a driven non-equilibrium system: How can the Frohlich energy source (surrounding heat bath) produce a coherence that is stable against a decohering influence (the same surrounding heat bath) that is just as strong as the energy source? Note that this situation is VERY different from the sun and plant life which was mentioned by Penrose in Emperor's New Mind. At page 320, Penrose says "... All this is made possible by the fact that the sun is a hot-spot in the sky! The sky is in a state of temperature imbalance: one small region of it, namely that occupied by the sun, is at a very much higher temperature than the rest. ... The earth gets energy from that hot-spot in a low entropy form ... and it re-radiates to the cold regions in a high-entropy form ...". Unlike the sun in the sky, Frohlich's surrounding heat bath source is at the SAME temperature as the rest of the brain (sky). For the brain to work like the sun and sky, you would have to find a part of the brain (sun) that is a lot hotter than most of the brain (sky). That should be easy to find experimentally, and in fact I seriously doubt that it exists, because it should be so easy to find that it would have already been found if it existed. So, I don't like the Frohlich electromagnetic coherence mechanism for maintaining brain-wide coherent states of MTs. However, I should say that electromagnetic processes are useful and may play some other roles in brain function.

I found some work of Nanopoulos et al (as to whether it is original or an application of ideas of others, I do not know) to be interesting, particularly the application of an "... error-correcting mathematical code known as the K2(13, 2^6, 5) code. ..." to MT information.

For maintaining a coherent superposition it is indeed 'the more, the better' because if they are linked together (as in Carver Mead's book Collective Electrodynamics) anything trying to decohere the superposition must be strong enough to do all of them, not just one of them. Philip Anderson calls that collective phenomenon a Quantum Protectorate - see http://www.valdostamuseum.org/hamsmith/QuanCon2.html#quantumprotectorate

On the other hand, Penrose Orch OR collapse-decoherence is based on the time-energy uncertainty principle T E = h, which gives a time T = h / E at which consciousness-collapse-decoherence takes place. The details of the actual calculation in my model are too long for e-mail but can be found at http://www.valdostamuseum.org/hamsmith/QuanCon.html#orchor and all the material on that page. Roughly, the resulting decoherence time T_N for N tubulins is T_N = N^(-5/3) x 10^26 sec. For example, for 4 x 10^15 tubulins (far from all 10^18 in the brain, but still a large number) the time is about 100 milliseconds, which is roughly the EEG alpha frequency, and for 10^17 tubulins the time is about half a millisecond.

So, the more tubulins you have, the more protected they are by the Quantum Protectorate, but the faster they collapse by Penrose Orch OR, so the less time you have to think your thought.

There is in my model a third relevant process, which is collapse due to the quantum fluctuations of the universe at large which I call GRW decoherence. My view of GRW itself is described at http://www.valdostamuseum.org/hamsmith/GRW.html and the relationship between GRW and Orch OR decoherence is shown at http://www.valdostamuseum.org/hamsmith/QuanCon.html#timegraph The equation for GRW decoherence time is roughly T_N = ( 1 / N ) x 3 x 10^14 sec which in the chart on the link immediately above is compared with the Orch OR decoherence time of about T_N = ( 1 / N^(5/3)) x 10^26 sec
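Taking the two decoherence-time formulas exactly as quoted (the rough round-number constants here will not exactly reproduce every number quoted in the surrounding text, which were presumably computed from more precise inputs; the function names are my own), they can be compared directly:

```python
# Orch OR decoherence time, T_N = N^(-5/3) x 10^26 sec, as quoted above
def orch_or_time(n):
    return 1e26 * n ** (-5.0 / 3.0)

# GRW decoherence time, T_N = (1/N) x 3 x 10^14 sec, as quoted above
def grw_time(n):
    return 3e14 / n

# Compare the two time-scales across a range of tubulin counts N
for n in (1e15, 4e15, 1e17, 1e18):
    print(f"N = {n:.0e}: Orch OR {orch_or_time(n):.1e} s, GRW {grw_time(n):.1e} s")
```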

There is still a fourth relevant process that limits the size of a single brain (i.e., limits the number of tubulins N), which is based on Paola Zizzi's quantum decoherence model, which I like and so include in my model. It gives the upper limit of about N = 2^64 or roughly 10^19. For details see http://www.valdostamuseum.org/hamsmith/QuanCon3.html#quinflation

So, there are four relevant processes in my model:

- 1 - the more the better Quantum Protectorate for maintaining coherence
- 2 - time-energy uncertainty for Penrose consciousness-collapse-decoherence (the more tubulins the shorter time to decoherence)
- 3 - the time before collapse by the universe's GRW mechanism
- 4 - the Zizzi decoherence upper limit N = 10^19.

For fewer than about 10^15 tubulins, GRW decoheres the superposition BEFORE the Orch OR decoherence takes place, so you must have AT LEAST 10^15 tubulins in order to have consciousness based on Penrose-type Orch OR.
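Combining this GRW lower bound with the Zizzi upper bound mentioned earlier gives a simple window test. This is a sketch using the essay's order-of-magnitude figures of 10^15 and 10^19 tubulins; the names are mine, not established terminology.

```python
GRW_LOWER_BOUND = 1e15    # below this, GRW collapses the superposition first
ZIZZI_UPPER_BOUND = 1e19  # Zizzi limit on a single coherent entity

def supports_orch_or(n_tubulins: float) -> bool:
    """True if N falls inside the essay's Orch OR consciousness window."""
    return GRW_LOWER_BOUND <= n_tubulins <= ZIZZI_UPPER_BOUND

if __name__ == "__main__":
    print(supports_orch_or(1e12))  # small brain: False
    print(supports_orch_or(1e18))  # human brain: True
```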

Given the human brain size limit of about 10^18 tubulins, the fastest that humans can think would be about 10^(-5) seconds.

As the Zizzi upper limit is about N = 10^19, the human brain has evolved to be almost as smart as a single brain can get. How could humanity get smarter? Maybe by cooperating more and fighting less. Maybe by having multiple brains (dolphins have 2). Maybe as in the movie Matrix or the Star Trek Borg by being forced involuntarily to cooperate. Of course, there is always the possibility that humanity might just stay the same, and be superseded by something else.

As to relevant experimental tests, some might be:

- 1 - experiments by Ray Chiao about gravity antennas similar to http://xxx.lanl.gov/abs/gr-qc/0204012
- 2 - Faraday cage communication experiments similar to those of Grinberg-Zylberbaum - see http://www.valdostamuseum.org/hamsmith/QuantumMind2003.html#grinzyl
- 3 - experiments with brain tissue (not limited to human brain tissue), but I don't know enough physiology to be very specific.

As to what I think of the Hameroff-Penrose article, I like parts of it (after all my model is based on some of their ideas) but I do differ a bit on some points and as to some calculated numbers. For example, Segrob Siul says that "They calculate 2 x 10^11 tubulins in superposition will reach threshold in 25 msec (40 Hz)" while my calculations (using 10^6 tubulins per neuron, or 10% of all tubulins) give:

Time T_N                           Number of Tubulins   Number of Neurons
10^(-5) sec                        10^18                10^12
5 x 10^(-4) sec (2 kHz)            10^17                10^11
25 x 10^(-3) sec (40 Hz)           10^16                10^10
100 x 10^(-3) sec (EEG alpha)      4 x 10^15            4 x 10^9
500 x 10^(-3) sec (Radin/Bierman)  1.5 x 10^15          1.5 x 10^9
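The neuron column above follows from the tubulin column by the rule of thumb stated just before the table (10^6 tubulins per neuron). A minimal sketch that derives it, with the times kept as quoted data:

```python
# Rule of thumb from the text: about 10^6 tubulins per neuron.
TUBULINS_PER_NEURON = 1e6

# (time label, tubulin count) pairs as quoted in the table.
rows = [
    ("1e-5 s",                   1e18),
    ("5e-4 s (2 kHz)",           1e17),
    ("25e-3 s (40 Hz)",          1e16),
    ("100e-3 s (EEG alpha)",     4e15),
    ("500e-3 s (Radin/Bierman)", 1.5e15),
]

for label, tubulins in rows:
    neurons = tubulins / TUBULINS_PER_NEURON
    print(f"{label:26s} {tubulins:.1e} tubulins  {neurons:.1e} neurons")
```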

I don't worry about numerical differences even of 1 or 2 orders of magnitude (factors of 10 or 100), because the exact details of the processes are not well known. To me, if a lot of calculations are all more or less consistent within that range, it indicates that the overall model is OK and worth the effort of making more refined versions.

However, I have not done much more work in this area since 2002, because the first paper that the Cornell arXiv rejected when I was blacklisted was a quantum consciousness paper. In a futile attempt to get off the blacklist, I started writing some more clearly (to me) plain-vanilla physics papers, calculating things like neutrino masses and mixing angles, and the conformal gravity model that explains the Pioneer effect and the WMAP observations of the ratio Dark Energy : Dark Matter : Ordinary Matter. I even thought that if I wrote out my model in terms of string theory they might let me off the blacklist, but they did not (my string model did not have conventional 1-1 supersymmetry and had a physical interpretation of strings as world-lines of point particles, and they probably disliked that). Anyhow, I realized around 2004 that writing good plain-vanilla-physics stuff would not help, but I have gotten off into writing that kind of stuff (lately some Tquark and Higgs stuff that might be seen at the LHC, and stuff about tapping into Dark Energy with Josephson Junction arrays), and I have not yet gotten around to doing more work on refining quantum consciousness models, constructing a generalized II1 Hyperfinite von Neumann factor, etc. There is not enough time for me, working by myself, to do all these things.

As to "... why both Penrose OR and GRW are needed. ... ? ", GRW and Zizzi are not needed in a bare-bones Penrose-Hameroff model to show that human consciousness is based on quantum gravity of tubulins. I include them because I think in the context of my overall Clifford algebra generalized II1 Hyperfinite von Neumann factor physics model it is natural for them to exist, and so I should take them into account. When I do take them into account:

- 1 - the Zizzi process gives an upper bound on the size of any single conscious entity (such as the human brain or the universe during inflation immediately after the Big Bang), and that upper bound (around 10^19) is roughly consistent with our experiences and observations.
- 2 - the GRW process decoheres the superposition for fewer than about 10^15 tubulins BEFORE the Orch OR decoherence takes place, so, as Segrob Siul said, "... According to [my] model ...[such]... animals cannot be conscious ..." in the Penrose-Hameroff sense of abstract thought for animals with fewer than about 10^15 tubulins in their brains. However, they may have some lower level of what is thought of as consciousness even if the higher-order Penrose-Hameroff Orch OR thought process is truncated by GRW, and they might have resonant connections with humans, dolphins, and other big-brain animals with high-level abstract Orch OR consciousness. The small-brain animals may also be able to form collective hive-minds that can do Orch OR abstract thought even though each individual cannot (sort of like individual human cells being too small for independent abstract Orch OR thought). In fact, such lower-level consciousness might even be present in rocks, raindrops, snowflakes, atoms, and the particles of particle physics (which in my model appear as Compton radius vortex clouds of virtual particles surrounding, for fermions, Kerr-Newman naked singularity black holes).

I wish I had time to work all such stuff out in detail, but probably the best I can do is a little bit about a few things in the short time that remains of my lifetime.

My model is based on Penrose-Hameroff because of two original and very useful ideas of theirs: coding the information using tubulins in microtubules as 2-state quantum systems; and using quantum gravity for Orch OR decoherence. I like and use some of their related ideas, such as information theory codes (I would use quantum information theory) related to patterns of tubulin states in the cylindrical microtubules. I also like the things that I added to their model:

- 1 - quantum collective protectorate using graviton resonance to protect the coherent superposition prior to Orch OR;
- 2 - since I already had GRW in my model, it was useful in showing that Orch OR high-level consciousness is only present in brains larger than about 10^15 tubulins, and the human brain falls in that range;
- 3 - since I like Zizzi's quantum decoherence model of our universe being coherent/conscious during the inflation immediately following the Big Bang, I applied it to put an upper limit (about 10^19 tubulins) on the size of brains that can use Orch OR high-level consciousness, and found that again the human brain also fits into that range.

So, the things that I like about my model that Penrose-Hameroff does not have are the graviton quantum protectorate and lower and upper bounds for Orch OR brain size, with the human brain fitting in OK.

The things I added do not contradict basic Penrose-Hameroff, they just add more stuff to it that seems to work.

Penrose's book "The Emperor's New Mind" (ENM) is indeed very interesting. However, Penrose wrote ENM around 1990, years before the development of quantum computing, quantum information theory, and quantum game theory, which began to be developed around 1995. In his older books (Emperor's New Mind and Shadows of the Mind) he did mention quantum computing, but only on the level of early possibility-in-principle ideas due to Deutsch and Feynman, and he did not then seem to take into account the major advances beginning around 1995, such as Cerf and Adami quant-ph/9512022 etc. For example, in Emperor's New Mind, Penrose said about quantum computing "... So far these results are a little disappointing ... but these are early days yet. ...".

Papers like that of Cerf and Adami at quant-ph/9512022 show that quantum information theory is very much like particle physics quantum theory, and since my model has a Clifford algebra basis for particle physics and for quantum consciousness, quantum computing seems to me to be effectively what Penrose is looking for (but had not been developed when he wrote ENM).

Penrose's latest book, "The Road to Reality" (2006), does mention quantum computers, saying that they "... would make use of the vastness of the sizes of the kinds of space of wavefunctions ..." and also mentions quantum information theory, which he calls "quanglement", and about which he says "... the precise role of quanglement in ... the circumstances under which R takes over from U ... is not yet very clear, to my [Penrose's] mind. ... A more promising connection is with some of the ideas of twistor theory, and these will be examined briefly in section 33.2 ...". In section 33.2, Penrose discusses twistors, and says "... it turns out that the conformal group has an important place in twistor theory ... We shall see more explicitly what the role of this group is in the next two sections ...". Those "next two sections" 33.3 and 33.4 say "... a 15-dimensional symmetry group - the conformal group - ... is ... Apart from the two-to-one nature of the correspondence arising from ... reversibility of the generator directions, O(2,4) ... The shortest ... way to describe a ... twistor is to say that it is a ... half spinor ... for O(2,4) ...". Since my model gets gravity from that conformal group, it is fundamentally a twistor theory, and since my model is also fundamentally a Clifford algebra connected to quantum information theory, I think that my model is a concrete example of what Penrose needs to complete his program.

......