Following up a conversation with John Amson at Rufflets Hotel, just outside St Andrews, I was led to a paper on combinatorial physics:

Ted Bastin, H. Pierre Noyes, John Amson, Clive W. Kilmister: On the physical interpretation and the mathematical structure of the combinatorial hierarchy, *International Journal of Theoretical Physics* **18** (1979), 445–488; doi: 10.1007/BF00670503

Here is the abstract. See what you make of it.

The combinatorial hierarchy model for basic particle processes is based on elementary entities; any representation they may have is discrete and two-valued. We call them Schnurs to suggest their most fundamental aspect as concatenating strings. Consider a definite small number of them. Consider an elementary creation act as a result of which two different Schnurs generate a new Schnur which is again different. We speak of this process as a “discrimination.” By this process and by this process alone can the complexity of the universe be explored. By concatenations of this process we create more complex entities which are themselves Schnurs at a new level of complexity. Everything plays a dual role in which something comes in from the outside to interact, and also serves as a synopsis or concatenation of such a process. We thus incorporate the observation metaphysic at the start, rejecting Bohr’s reduction to the haptic language of common sense and classical physics. Since discriminations occur sequentially, our model is consistent with a “fixed past-uncertain future” philosophy of physics. We demonstrate that this model generates four hierarchical levels of rapidly increasing complexity. Concrete interpretation of the four levels of the hierarchy (with cardinals 3, 7, 127, 2^{127}−1 ∼ 10^{38}) associates the three levels which map up and down with the three absolute conservation laws (charge, baryon number, lepton number) and the spin dichotomy. The first level represents +, −, and ± unit charge. The second has the quantum numbers of a baryon-antibaryon pair and associated charged meson (e.g., n̄n, p̄n, p̄p, n̄p, π^{+}, π^{0}, π^{−}).
The third level associates this pair, now including four spin states as well as four charge states, with a neutral lepton-antilepton pair (ēe or ν̄ν), each pair in four spin states (total, 64 states) – three charged spinless, three charged spin-1, and a neutral spin-1 mesons (15 states), and a neutral vector boson associated with the leptons; this gives 3+15+3×15=63 possible boson states, so a total correct count of 63+64=127 states. Something like SU_{2}×SU_{3} and other indications of quark quantum numbers can occur as substructures at the fourth (unstable) level. Breaking into the (Bose) hierarchy by structures with the quantum numbers of a fermion, if this is an electron, allows us to understand Parker-Rhodes’ calculation of *m*_{p}/*m*_{e} = 1836.1515 in terms of our interpretation of the hierarchy. A slight extension gives us the usual static approximation to the binding energy of the hydrogen atom, α^{2}*m*_{e}*c*^{2}. We also show that the cosmological implications of the theory are in accord with current experience. We conclude that we have made a promising beginning in the physical interpretation of a theory which could eventually encompass all branches of physics.

My first reaction is something along these lines. Pythagoras is thought to have believed that “all is number”, and also to have believed in reincarnation. (Of course, we know nothing about what he really believed.) So perhaps these authors are channelling the spirit of Pythagoras.

Note also that *Schnur* is German for “string”, as claimed, but is also defined by the Urban Dictionary as “the ultimate insult that means absolutely nothing”. Interesting?

Actually reading the paper didn’t clear up all my doubts. The setting is sets of binary strings. A set of strings is called a *discriminately closed subset* if it consists of the non-zero elements in a subspace of the vector space of all strings of fixed length *n*. Such a subset has cardinality 2^{j}−1, where *j* is its dimension. Now a step in the *combinatorial hierarchy* involves finding a set of 2^{j}−1 matrices which are linearly independent and have the property that each of them fixes just one vector in the subset (I think this is right, but the wording in the paper is not completely clear to me). These matrices span a DCsS of dimension 2^{j}−1 in the space of all strings of length *n*^{2}.
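
To make the definitions concrete, here is a minimal sketch (my own illustration in Python, not code from the paper), treating bit strings as integers, “discrimination” as bitwise XOR, and a DCsS as the non-zero part of a span:

```python
def dcs_closure(generators):
    """Non-zero elements of the GF(2) span of the given bit strings
    (encoded as Python ints): a discriminately closed subset (DCsS)."""
    span = {0}
    for g in generators:
        # "discrimination" of two distinct strings is their bitwise XOR
        span |= {g ^ v for v in span}
    span.discard(0)
    return span

# Two independent strings of length 2 give a DCsS of cardinality 2^2 - 1 = 3.
print(sorted(dcs_closure([0b01, 0b10])))  # [1, 2, 3]
```

Three independent generators give cardinality 2^3−1 = 7, and so on: the size is always 2^{j}−1 for a subspace of dimension *j*, as claimed.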

Of course, the exponential function grows faster than the squaring function, so the hierarchy (starting from any given DCsS) is finite. Their most important example starts with *j* = *n* = 2 (two linearly independent vectors in {0,1}^{2} and their sum), and proceeds to *j* = 2^{2}−1 = 3, *n* = 2^{2} = 4; then *j* = 2^{3}−1 = 7, *n* = 4^{2} = 16; then *j* = 2^{7}−1 = 127, *n* = 16^{2} = 256; then *j* = 2^{127}−1 ∼ 10^{38}, *n* = 256^{2} = 65536 (but this is impossible, so I am not sure what it means to continue the hierarchy to this point).
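
The level-counting itself is easy to mechanize; a sketch (assuming the starting point *j* = *n* = 2 described above):

```python
def hierarchy(levels=4, j=2, n=2):
    """Iterate the combinatorial-hierarchy step: the DCsS dimension j
    becomes 2^j - 1 and the string length n becomes n^2."""
    out = []
    for _ in range(levels):
        j, n = 2 ** j - 1, n * n
        out.append((j, n))
    return out

for j, n in hierarchy():
    print(j, n)
# 3 4
# 7 16
# 127 256
# 170141183460469231731687303715884105727 65536
```

At the fourth level the 2^{127}−1 required linearly independent matrices would have to live in a space of dimension 256^2 = 65536, which is the impossibility noted above.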

They point out that the numbers 127 and 2^{127}−1 are close to, respectively, the reciprocal of the fine structure constant and the ratio of strengths of electromagnetic to gravitational force between protons. If you use cumulative sums, you get 3, 10, 137 and a number which is again about 10^{38}, and of course 137 is even closer to the target. But I am not sure why you should do this, and in any case the reciprocal of the fine structure constant is measurably different from 137. The non-existence of 10^{38} linearly independent matrices of order 256 supposedly has something to do with “weak decay processes”.

The paper contains some constructions of the appropriate hierarchies. One could pose the mathematical question: do the required linearly independent non-singular matrices exist for any *n* and *j* for which 2^{j}−1 ≤ *n*^{2}?

By this stage I was floundering, so I gave up my careful reading. I noted at a certain point a calculation of the ratio of proton mass to electron mass giving a value 137π/((3/14)×(1+(2/7)+(2/7)^{2})×4/5), agreeing with the experimental value to eight significant figures. (There are three terms in the geometric series because the hierarchy falls over at step 4.) Of course, putting in a more accurate value for the fine structure constant would make the agreement less good: the authors do attempt to explain this.
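
Checking the arithmetic of the quoted expression is straightforward (this reproduces only the number, of course, not any physics):

```python
from math import pi

# The Parker-Rhodes mass-ratio expression as quoted in the post
ratio = 137 * pi / ((3 / 14) * (1 + 2 / 7 + (2 / 7) ** 2) * (4 / 5))
print(round(ratio, 4))  # 1836.1515
```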

(If you are a mathematician looking at the paper, the mathematics is put in an appendix starting on page 480, so you can avoid the physics.)

At another point in the conversation, John told me that he ran with the fox and hunted with the hounds. On the strength of this, I can perhaps attribute to him a certain degree of scepticism about all this.

Hi Peter. My background is retired Analyst/Programmer with a Postgrad. Dip. in Computing and a long-standing interest in Physics. I’ve been working on these papers by Noyes for over four years now, building computer models of Program Universe and investigating the theory extremely thoroughly, having been amazed to see 1/137 “explained”. Unfortunately I have found that it is not yet explained; but I guess that leaves me with something to pursue!

There is a major flaw in Noyes’ and Bastin’s papers. Any run of Program Universe as they have defined it does NOT produce Tables which represent such a Combinatorial Hierarchy (3, 7, 127). At 256 bits it produces, on average, a basis for the DCsS of dimension 254 (rather than anywhere near 127), approximately Gaussian with standard deviation around 3. The particular structure (3, 7, 127) has a probability lying over 42 standard deviations below the average, and is thus extremely unlikely.
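
For comparison, here is a sketch (my own reconstruction, not Green’s actual Program Universe code) that measures the dimension of the XOR-span of a table of random bit strings. Note that for uniformly random strings the rank is almost always full; Green’s average of 254 at 256 bits presumably reflects the particular way Program Universe generates its tables.

```python
import random

def gf2_rank(rows):
    """Dimension of the XOR-span of a list of bit strings (ints)."""
    pivots = {}  # leading-bit position -> basis vector with that pivot
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = cur  # new pivot: cur joins the basis
                break
            cur ^= pivots[lead]     # reduce by the existing basis vector
    return len(pivots)

# Dimension of the span of 512 uniformly random 256-bit strings
n = 256
table = [random.getrandbits(n) for _ in range(2 * n)]
print(gf2_rank(table))  # almost always 256 for uniform random strings
```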

A quick answer to your questions. The limit they propose of n × n matrices is absurd. Yes, there does exist a binary matrix of 256 by 256 having 256 × 256 elements. But it is not n × n; there are n × 2^n AVAILABLE matrices, which grows much, much faster than 2^n − 1. I don’t understand how these people can make this fairly simple counting error! So we have a Universe Table of 256-bit strings. We construct a 256 × 256 binary array with determinant 1 or −1. There is no limit to the hierarchy as Noyes, Bastin and Rhodes suggest. Any Table comes with a Closed Portion on the left (topologically closed, meaning in this case that it contains all its elements including zero) and an Unclosed Portion to the right. This number may be anything between 2 and n but averages around half the length.

His (Noyes’) probability arguments make no sense to me. Basis sizes multiply, they don’t add, so to calculate the Fine Structure Constant he ADDS 3 + 7 + 127 = 137 and then says one of these 137 will be a fermion/photon interaction. The dimensions should be MULTIPLIED (being a Direct Product of Finite Cyclic Groups), so 1/(3×7×127) should be his probability! This argument is just plain wrong as a probability calculation. How can a mainstream qualified working physicist make such an elementary error in a probability calculation?
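
Just to pin down the two candidate numbers in this dispute:

```python
# Noyes' cumulative sum versus the product Green argues for
print(3 + 7 + 127)   # 137
print(3 * 7 * 127)   # 2667
```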

Yes, we can always find j independent strings such that 2^j − 1 ≤ n × 2^n.

It is almost self-evident: we simply take each bit position in the string and form a unit string, i.e. {1, 10, 100, 1000, …, 2^(n−1)}. These may be combined under XOR to form any of the 2^n − 1 possible non-zero bit strings. So we can take any j of these and they form a linearly independent basis set under XOR. Thus any Universe Table will have a basis of a random size j, where j ranges from 2 to n in a reversed Poisson distribution. (Actually it looks more like a reversed black-body spectral intensity/wavelength curve, having a sharp cutoff at high probability on the right.)

Most of my work has been focused on finding a good mathematical model to extrapolate to large bit lengths. My computer starts to complain miserably about not being able to assign gigabytes of heap space above 32 bits, then goes into a sulk for days. We are dealing with literally astronomical numbers: power sets of possible Tables, so at 32 bits we have 2^(2^32) = 2^4294967296 possible tables. Obviously individual tables can't be counted or classified by experiment, so I limit my experiments to about 24 bits at most, and generally I stick to around 16 bits.

My working extrapolation for the average basis size is B(i+1) = (N_i + B(i))/2, with N(2) = 1.5 (i.e. a 50/50 chance of 1 or 2).

A significant (from a Quantum Physics view) basis size is 240 = 2^4 × 3 × 5, which lies within 5 standard deviations of the average 254. Standard Gauge Theory is based on the group U(1) × SU(2) × SU(3). The factors of the basis size create a composite cyclic group structure Z(2)×Z(2)×Z(2)×Z(2)×Z(3)×Z(5). These cyclic groups form the central groups of SU(2) and SU(3). U(1) is the unit circle group of planar rotations and is implicit in every SU(n) group.

This (240) is the only number near 254 which factorizes in a way consistent with the Standard Model Gauge Theory.

Is this post too long? :}

In summary:

The Combinatorial Hierarchy does not terminate at level 4; it just becomes less and less probable that effects will be noticeable without a galaxy-sized counting instrument coupled to a planet-sized computer!

The correspondence between Feynman Diagrams and Program Universe is truly (I think) the hugest thing in physics. In one way or another I have been working on Finite Discrimination for over 40 years and I am grateful to Rhodes, Amson, Noyes, Bastin etc. for their inspiration, despite some of the glaring errors. Noyes' knowledge of physics is terrific but his mathematical methods are quite suspect. You can see, though, how a tabular Label/Content interaction scheme would behave like a discrete quantum field permeating space.

Counting is Fundamental!

Les Green – ZOS

A simple proof of the existence of a basis of size j is that any square n × n matrix with non-zero determinant is diagonalizable to n trace elements, so we can “undiagonalize” the trace of any binary square matrix to produce any set of independent strings. I.e. all bases of the same size are isomorphic!

From your post: “Now a step in the combinatorial hierarchy involves finding a set of 2^j−1 matrices which are linearly independent and have the property that each of them fixes just one vector in the subset”. No, the step involves finding ONE (rather than 2^j−1) square binary matrix which recognizes whether a string is a member of the basis or not (eigenvalue 1 or 0). The claim that no such matrix can be found beyond 256 bits because there “aren’t enough candidate matrices” is mathematical nonsense. n×n is the number of elements in the matrix in question, not the number of binary matrix candidates available. It is provable that if there is an orthonormal basis then the required “recognizer” matrix must exist. What they are saying (incorrectly) is that beyond 256 bits you can’t find an orthonormal basis, which is self-evident rubbish!

You may have gotten the impression that I find Noyes, Rhodes, Bastin et al. a load of nonsense, but this is definitely NOT the case. There are several glaring mathematical errors (illogical, unclear or wrong), but these aside the material is well worth studying from a physics point of view. It explains (a priori) a gently inflating universe, phases of universe evolution such as baryogenesis and confinement, and structural constants in the general case. My brother and I believe it is a new and valid approach to quantum physics, and indeed to any structure based on finite discrimination.

Having read through my post I find I have been uncharacteristically rude and unprofessional. I apologize to you, Peter, and to Noyes for my remarks on what I suggested were ‘elementary errors’.

These remarks were uncalled for and it may be that I misunderstand.

The proof of the existence of the Linear Independence Operator (n x n square matrix which has an Eigenvector Set matching the orthonormal basis) is given by Ted Bastin in one of his papers. Unfortunately I don’t have the citation to hand.

Further thought has shown me that what I believe they intend to do is define a fixed basis of a particular size. Since a physicist cannot measure the whole universe, it is not possible to count the basis vectors as a programmer can do on a finite table. They are entitled to select a basis size according to the limited known distinctions (chirality, charge, colour, 4-momentum). This, in effect, means ignoring subtler distinctions that they can’t yet experimentally observe.

What I draw from Noyes’ and Bastin’s (et al.) papers on the subject is:

We can consider the simplest fermionic structure to be the Neutrino, with 4 binary distinctions.

Program Universe starts with the simplest structure (Neutrino) and elaborates this in a fixed way. Noyes describes this as “the structure developed at the previous level is taken up (in some way) to the next level”. (I’m quoting from memory so this may not be verbatim!)

The neutrino from the first level is elaborated at the second level into what may be described as a composite of 4 neutrinos, and the distinction of charge is added. At the third level the neutrino + electron structure is taken up, Charge is split into two (2/3, 1/3), and further distinctions are added: Up/Down, Particle/Antiparticle, Colour (r/b/g), thus constructing the Quark Structure.

We get a basis pattern for the Fermionic Labels interpretable as:

Generation (1, 2, 3); Neutrino (L/R); Electron (L/R | +/−); Quark (L/R | +/− | 2:3/1:3 | P/A | U/D | R/B/G)

If we identify the basis vectors so formed with fermions, we find that all composite labels are either fermions or combinations of fermions. An even combination of fermions is what a physicist calls a boson and is of course a fermion changing state. Odd combinations of fermions may be considered to be fermion + boson, i.e. an excited state of the fermion.

The Labels represent exactly the Feynman 3-leg and 4-leg diagrams and in addition the complete bit-string represents a 4-momentum which is conserved by the fundamental logical operation of Exclusive-OR (addition modulo 2).

In the various papers (particularly Bastin’s) the statistical calculations on Universe Tables are shown to model the Lorentz Transformations in particular and in general Linear Transformations.

In conclusion it is my opinion that Noyes’ (et al.) proposed model based on Program Universe (Combinatorial Hierarchy) is an accurate model for our physical universe. If this is so then the theory provides a priori methods to calculate all structure constants as probabilities, including those of the decay channels. Should these fail to agree with experiment then the theory will be found to be wrong; i.e. the theory is falsifiable and thus scientifically valid as a theory.

I have a long way to go yet in understanding these Tables fully as a model of the quantum universe but I’m making progress.

My comment of the 27/8, where I say “No, the step involves finding ONE (rather than 2^j−1) square binary matrix which recognizes whether a string is a member of the basis or not (eigenvalue 1 or 0)”, is incorrect. *Wipes egg off face again!* I haven’t visited that particular section for a while and on re-reading I find I’ve misremembered. It is one matrix with one eigenvector for each of the 2^127−1 strings, i.e. as you understood it. However the counting problem still remains. At the so-called final level of 256 bits we have 256 × 2^256 = 2^264 binary square matrices available (not 256 × 256 = 2^16 as stated in the relevant papers) from which to construct the 2^127 − 1 independent basis strings of length 256 × 256 by unfolding the matrices. The last time I counted, 256 × 2^256 = 2^8 × 2^256 = 2^264, which is very much more than 2^127 − 1: roughly 2^137 times larger. So the Combinatorial Hierarchy does not stop for the reasons given, if it stops at all. It is ironic that what attracted me in the first place, besides the explanation of the curious number 137, was the “fact” of an algorithm naturally stopping after a fixed size. It is not a fact, though, and I wish someone would give a better explanation than has been given. The physical evidence is that either the distinctions beyond 256 bits are too fine for us to measure, or there are no distinctions beyond 256 bits, which would require the Hierarchical Algorithm to stop naturally. At the moment, without further evidence to the contrary, I think the former pertains.

The reason it stops is NOT that there are not enough matrices. That is the claim made in the paper “On the Physical Interpretation of the Combinatorial Hierarchy” (Ted Bastin and H. Pierre Noyes, Dec 1978); quoting verbatim:

“The process [basis construction] will terminate if n^2(L) < 2^j(L) -1 since at level L there are only n^2(L) linearly independent matrices available (and not all non-singular)."

The bit strings of length J form a vector space spanned by a basis of independent unit vectors {2^0, 2^1, 2^2, …, 2^(J−1)}, and thus we can guarantee the existence of J independent square binary matrices of dimensions J×J whose eigenvectors are these unit vectors. These can be found by un-diagonalizing the unit matrix, as I mentioned in a previous post. So the claim is completely false.
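
The existence claim is easy to illustrate concretely (my own sketch, not Green’s or Bastin’s construction): take the J matrices with a single 1 on the diagonal; the i-th fixes exactly the i-th unit vector, and they are clearly linearly independent.

```python
def unit_fixers(J):
    """J linearly independent J x J binary matrices; the i-th has a single
    1 at position (i, i), so the only unit vector it fixes is e_i."""
    return [[[1 if r == c == i else 0 for c in range(J)] for r in range(J)]
            for i in range(J)]

def matvec_mod2(M, v):
    """Multiply a binary matrix by a binary vector over GF(2)."""
    return [sum(M[r][c] & v[c] for c in range(len(v))) % 2
            for r in range(len(M))]

e1 = [0, 1, 0]
print(matvec_mod2(unit_fixers(3)[1], e1))  # [0, 1, 0]: e_1 is fixed
```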

The reason the construction stops is that there are not enough bits in 256 × 256 to create a 2^127 − 1 bit string; i.e. 256 × 256 = 2^16 bits << 2^127 − 1 bits. The key understanding is that an independent basis of size k requires strings of at least k bits. So I can now shout Hallelujah! The particular Combinatorial Hierarchy in question does stop naturally, which is somewhat useful to the particular theory of Noyes et al. So Noyes, Bastin and Amson's papers are correct where they rely on the basis construction stopping at the fourth level, but not for the reason they give in that particular paper and others too numerous to mention.

We can see that we could continue the construction if we used a square matrix of at least 2^127 − 1 bits to construct the fifth level, for example, but this represents a different Combinatorial Hierarchy, one in which the number of bits is chosen to exactly cover the maximal DCsS (strict subset) at that level. The sequence of bit lengths would then be 2, 3, 7, 127, 2^127 − 1, 2^(2^127 − 1) − 1, … rather than 2, 4, 16, 256, stop.
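
A sketch of this alternative sequence (my own illustration): if each level's bit length is chosen to equal its dimension, a basis of size j always fits in strings of j bits, so nothing forces the recursion to stop.

```python
def alt_hierarchy(k):
    """Alternative hierarchy in which the bit length at each level equals
    the dimension of the maximal DCsS: both follow j -> 2^j - 1."""
    seq = [2]
    while len(seq) < k:
        seq.append(2 ** seq[-1] - 1)
    return seq

seq = alt_hierarchy(5)
print(seq[:4])        # [2, 3, 7, 127]
print(sum(seq[1:4]))  # 137, the same cumulative sum as before
```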

There seems to be no a priori reason to prefer one sequence to the other, and indeed we see the same crucial number 137 = 3 + 7 + 127 = (2^2−1) + (2^3−1) + (2^7−1) embedded quite naturally in the alternative sequence, which does not terminate. The probability calculations are not affected by the particular bit length chosen to represent the vectors (provided they are long enough to represent the basis vectors) but only by the cardinal size of the basis set, so these two hierarchies give the same probability calculations if we ignore the smaller terms from the second. But as the second does not stop, I can only conclude that there are an unlimited number of structural coupling constants, and thus forces, but that they are so weak they have as yet remained undetected [the next would be 2^(2^127 − 1)]. These of course would provide an explanation and calculation for Dark Matter/Energy, if the theory is a correct model for quantum physics, and provide corrections to the coupling constants already calculated by Rhodes, Noyes, Bastin, Amson et al. The failure of the Combinatorial Hierarchy to stop may not be crucial at all but instead, explanatory!

I conjectured there to be a mathematical identity between the 4 levels of the combinatorial hierarchy and the 4 normed division algebras.

http://math.stackexchange.com/questions/216904/normed-division-algebras-and-combinatorial-hierarchy