Quantum Mechanics without Observers
Abstract
The measurement problem and the role of observers have plagued quantum mechanics since its inception. Attempts to resolve them have introduced anthropomorphic or nonrealist notions into physics. A shift of perspective based upon process theory, utilizing methods from combinatorial games, interpolation theory and complex systems theory, results in a novel realist version of quantum mechanics incorporating quasilocal, nondeterministic hidden variables that are compatible with the no-hidden-variable theorems and relativistic invariance, and that reproduce the standard results of quantum mechanics to a high degree of accuracy without invoking observers.
pacs: 03.65.Ta, 03.65.Ud, 02.10.De, 02.30.Px, 02.40.Ul, 02.50.Le
I Preamble
In spite of nearly a century of intense and profound theoretical and philosophical effort, the measurement problem remains one of the outstanding problems in the foundations of quantum mechanics Wheeler , raising questions about the nature of reality itself Norsen ; Hemmick . Its resolution would not only be a major achievement in quantum foundations but might also resolve many outstanding paradoxes such as Hardy’s paradox, Schrödinger’s cat, and delayed choice, and perhaps provide new insight into quantum gravity and the problems associated with the many infinities of quantum field theory.
In this paper I propose a realist, quasilocal, nondeterministic, time-directed, hidden variable model without observers which is relativistically invariant, avoids the no-hidden-variable constraints, yet is capable of reproducing all of the results of nonrelativistic quantum mechanics (NRQM) to a high degree of accuracy. It is at least theoretically testable. In this model, quantum mechanics emerges as the asymptotic limit of an inherently unobservable and intrinsically discrete lower level dynamics.
Many approaches have been taken to resolve this problem, varying in the degree to which they consider the wave function as complete, incomplete, ontic or epistemic spekkens . The wave function may be considered complete and ontic, in which case additional assumptions are incorporated into the quantum mechanical formalism, such as multiple universes, multiple times (stochastic quantization), nonlinear terms (continuous spontaneous localization), noise terms, or decoherence. It may be considered complete and epistemic, as in the consistent histories approach. It may be considered incomplete, in which case an additional set of ‘hidden variables’ is introduced (Bohmian mechanics) or a lower level dynamics such as cellular automata wolfram or causal sets sorkin is added. There have also been attempts to reformulate quantum mechanics in different mathematical languages Coecke or using non-Kolmogorovian probability theory Khrennikov .
Since the seminal writings of Bohr, the measurement problem has generally been conflated with questions about the role of observers, and whether or not a realist model of quantum mechanics is possible. In the absence of an ultimate level of reality, quantum phenomena appear not to possess definite properties without the intervention of an observer. This brings a rather discomfiting aspect of anthropomorphism into physical discourse. To eliminate observers requires the assertion of an underlying reality, but Bell’s research appears to show that any such reality must possess rather odd features, such as nonlocality. This paper explores the possibility that the measurement problem and the many paradoxes of quantum mechanics arise because of the unfortunate choice of functional analysis on Hilbert spaces as the representational setting for its theory. A shift to a different representational system can eliminate these problems while still maintaining a realist metaphysics.
The approach taken here is based upon process theory Whitehead ; eastman , archetypal dynamics and emergence theory Sulis , combinatorial game theory Conway , forcing hodges and interpolation theory Zayed . It proposes a Planck or sub-Planck scale dynamics from which quantum phenomena emerge at larger scales. It assumes that states yield bounded measurements and that wave functions are bandlimited. Modeled upon Whitehead’s ‘actual occasions’, its primitive elements emerge into existence apparently possessing spatiotemporal extension and definite properties, then fade into nonexistence while leaving an informational trace that can be carried forward by subsequent elements. Physical entities are viewed as emergent, informationally coherent patterns of primitive elements.
The core idea is derived from Whitehead’s process theory, and considers all physical phenomena as emergent from an ultimate reality of information laden entities called actual occasions. The basic postulates are that

everything in reality is generated by process,

everything that we observe is emergent from an ultimate lowest level that itself is inherently unobservable,

individual events of reality come into existence in a discrete but nonlocalized form, and what we observe is a diffuse avatar, which extends over space and time and constitutes a wave function,

this wave function is an emergent effect but it is through such wave functions that observable physical phenomena arise.
Reality possesses two aspects: actual occasions, which are the primitive experiential elements, and processes, which generate the actual occasions. Manifest actual occasions in turn influence which processes are active and interacting. All physical entities are emergent from these actual occasions. This poses intrinsic limits on the observational abilities of these entities. Individual actual occasions cannot, in general, be observed. Process too can be inferred but not directly observed. Processes may be ascribed definite informational parameters which influence the nature of the actual occasions that they generate, which in turn influence the properties ascribed to the resulting emergent physical entities. Processes generate the actual occasions that manifest space and time, but they themselves are to be viewed as having an existence which stands outside of spacetime. The quantum nature of reality is a consequence of the discreteness manifesting at the fundamental level, while the wave nature of reality is a consequence of the inability to resolve these fundamental events in space and time; thus physical reality acquires emergent wave-like aspects.
Process theory places emphasis upon three important characteristics:

the unfolding of process in the manifestation of actual occasions is strongly determined by the context generated by all participating processes and by the dynamics of their interactions

observable aspects of physical reality (continuity, spacetime, physical entities, symmetries) are emergent

processes interact through multiple forms of linearity
It is commonly asserted that there are three fundamental differences between quantum phenomena and classical phenomena. Quantum mechanics demonstrates:

the quantization of exchange in interactions

the existence of non-Kolmogorovian probability

the existence of nonlocal influences
A major goal of this paper is to show that these features may be true of classical systems as well, and that their putative absence is actually a reflection of the mathematical languages that have been used to describe the classical and quantum worlds rather than a property of those worlds in themselves. There are alternative formulations of dynamics, based on iterated function systems and combinatorial games, that can describe the classical world and yet manifest these so-called quantum aspects. An argument is given for an expansion of the language used to describe fundamental processes beyond that of functions and function spaces to include iterated function systems and combinatorial games. In so doing, many of the apparent paradoxes of quantum mechanics disappear without the need to resort to anthropomorphism, solipsism, and bizarre metaphysical constructs, and more importantly, without the need to abandon realism.
II Introduction
III The Measurement Problem
In physics a physical system, at each point along its world line, is assumed to exist in a state, and this state determines all past and future states (at least in the absence of interactions with other systems). In classical physics it is assumed that this state can be described, usually in the form of a set of parameters linked to physical constructs such as position, linear momentum, angular momentum, mass, energy, etc. and that a knowledge of the values of these parameters for a given state suffices to predict these values for past and future states. Such a description of a state is said to be complete, since no additional information is required to enable these predictions to be carried out. In classical physics these parameters are assumed to be intrinsic ontological characteristics of the system itself, independent of any other system. It is further assumed that there exist particular systems termed measurement apparatus that are capable of interacting with the system in such a way as to manifest the values of these parameters without inducing any change in the system. Moreover, it is assumed that all such parameters can, in principle, be simultaneously measured so that it is possible, at least in principle, to actually obtain this complete description of the state of a system. In classical physics it is possible to assert that a physical system is, and that it has these properties as defined by these parameter values.
The development of quantum mechanics raised serious questions about the validity of these ideas. The discovery by Heisenberg of the uncertainty relations appeared to place fundamental limits on the degree of accuracy with which measurements could be simultaneously carried out. It appeared to be impossible to determine a complete set of parameter values attributable to a given state. Bohr suggested that measurements could not in principle be separated from the conditions under which they were obtained. He wrote (Bohr, pp. 73, 90)
every atomic phenomenon is closed in the sense that its observation is based on registrations obtained by means of suitable amplification devices with irreversible functioning such as, for example, permanent marks on the photographic plate, caused by the penetrations of electrons into the emulsion… the quantum mechanical formalism permits well-defined applications referring only to such closed phenomena and must be considered a rational generalization of classical physics.
Fundamental entities appeared to manifest contradictory properties, on the one hand appearing to be distributed in spacetime while on the other hand manifesting effects that appeared to be localized: the so-called wave-particle duality. Properties appeared to occur discretely in some cases and continuously in others, such as the energy levels of bound and free particles respectively. It appeared possible to create identical copies of systems that on measurement yielded different values for some parameters, such as occurs in superposition states.
All of these features depart profoundly from the classical conception. In quantum mechanics it appeared that the best one could do was to ascribe a probability distribution to the results of any measurement procedure applied to a quantum system. The notion of a state changed from being a complete description of the parameter values to being a complete specification of the probability distributions associated with measurements of these values. While classical physics could assert that at the lowest level of reality there existed definite somethings that possessed definite properties whose values were determined by measurements, quantum mechanics could only assert that certain experimental arrangements yielded stable probability distributions of measurements, while the status of any underlying ultimate reality remained in doubt. A probability distribution is not a thing in itself but rather a description of frequencies of occurrences of things. But it is not clear in quantum mechanics who owns these things. Are they actual properties of the physical system being measured? If so then one encounters a decidedly nonrealist view of reality, since quantum systems can manifest multiple values of the same property while having indeterminate values of other properties. Or are these so-called properties merely outcomes of certain formalized types of interactions, so that they cannot be attributed to either the quantum system or the measurement apparatus but only to the context of the interaction, as Bohr has suggested? But if so, then in what sense does the quantum system possess reality? What is its ontological status when it is not in a measurement interaction? How do two quantum systems interact, and what determines such interactions if not these properties? And if it is these properties, then how do the quantum systems determine what they are in the absence of a measurement interaction?
Problems associated with the ontological and epistemological status of quantum concepts have plagued the theory since its inception and continue to this day. The Copenhagen interpretation of Bohr emphasized the contextuality of measurement. Each measurement was to be understood in the context of a quantum system interacting with a classical measurement apparatus, and served to link certain aspects of quantum systems to classical measurement constructs. Unfortunately the pernicious tendency to ignore the emergent and contextual nature of classical properties and to view such properties as intrinsic to entities led to the persistent notion that quantum properties do not exist unless there is an observer and an observation to manifest them.
It is this feature that led to Wheeler’s famous dictum that “No elementary phenomenon is a phenomenon until it is a registered (observed) phenomenon” (Wheeler, p. 184).
Manifestly though, reality exists without observers. The universe existed long before there were living entities to observe it, and why should living entities possess such a privileged status when they are physical entities like all other physical entities?
There are two very specific problems associated with the notion of measurement that constitute the so-called measurement problem. The first is the problem of wave function collapse.
Nonrelativistic quantum mechanics posits that the state of a physical system is describable by a wave function $\psi$ which is governed by the Schrödinger equation
$$i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi$$
where $\hat{H}$ is an operator version of the classical Hamiltonian function for the system. $\hat{H}$ is a linear operator, which is significant because it means that if $\psi_1$ and $\psi_2$ are both solutions of the Schrödinger equation then so is $a\psi_1 + b\psi_2$ for arbitrary complex weights $a, b$.
Measurement is described entirely differently. To each measurement situation there corresponds an operator $\hat{O}$, and the possible results of a measurement are given by the eigenvalues of this operator, i.e. those values $\lambda$ which satisfy the equation
$$\hat{O}\psi = \lambda\psi.$$
Suppose that $\psi_1$ is a solution corresponding to eigenvalue $\lambda_1$ and $\psi_2$ is a second solution corresponding to eigenvalue $\lambda_2$. Then the sum $a\psi_1 + b\psi_2$ is a solution, but what value is ascribable to a measurement of such a system? Applying $\hat{O}$ to the sum yields
$$\hat{O}(a\psi_1 + b\psi_2) = a\lambda_1\psi_1 + b\lambda_2\psi_2.$$
The sum is not an eigenvector of $\hat{O}$, so it would appear that no measurement value can be ascribed to the superposition. In fact, if such a superposition is measured it does yield definite measured values, $\lambda_1$ and $\lambda_2$, but these appear randomly with frequencies proportional to $|a|^2$ and $|b|^2$ respectively.
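The random appearance of definite values with these frequencies is easy to illustrate by simulation. The following is a sketch of the statistics only; the weights, eigenvalues and sample size are my own illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative sketch: repeatedly "measuring" the superposition
# a*psi1 + b*psi2 yields eigenvalue lambda1 with frequency |a|^2 and
# lambda2 with frequency |b|^2 (the weights here are already
# normalized, |a|^2 + |b|^2 = 1).
rng = np.random.default_rng(0)

a, b = 0.6, 0.8j          # complex weights of the superposition
lam1, lam2 = -1.0, 1.0    # eigenvalues of the measured operator

p1 = abs(a) ** 2          # Born-rule probability of outcome lambda1
outcomes = rng.choice([lam1, lam2], size=100_000, p=[p1, 1 - p1])

freq1 = np.mean(outcomes == lam1)
print(freq1)              # close to |a|^2 = 0.36
```

Each individual run yields one definite eigenvalue; only the long-run frequencies reflect the weights.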
The time evolution of a quantum system is determined by a (linear) unitary operator, so that a superposition always remains a superposition under Schrödinger evolution. How, then, can it ever yield a definite measurement? This was a problem recognized by von Neumann vonNeumann and later discussed by London and Bauer London . The long held conventional solution is that somehow, as a result of a measurement, the wave function undergoes a discontinuous transition from the superposition to one of the eigenstates that comprise the superposition. This transition occurs probabilistically, with transition probabilities given by the squared modulus of the coefficient of the respective eigenfunction.
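The persistence of superpositions under unitary evolution can be seen directly in a toy two-level system. The energies and initial weights below are assumed purely for illustration, in units with $\hbar = 1$:

```python
import numpy as np

# Sketch: for a Hamiltonian with eigenvalues E1, E2, the evolution
# operator U(t) = diag(exp(-i*E1*t), exp(-i*E2*t)) is unitary and
# linear, so the magnitude of each eigenstate's weight in a
# superposition is preserved for all time -- unitary evolution alone
# never collapses the state onto a single eigenstate.
E = np.array([1.0, 2.0])                    # energy eigenvalues (hbar = 1)
psi0 = np.array([0.6, 0.8], dtype=complex)  # initial superposition weights

for t in (0.0, 1.0, 10.0):
    U = np.diag(np.exp(-1j * E * t))        # unitary time-evolution operator
    psi_t = U @ psi0
    print(t, np.abs(psi_t))                 # magnitudes remain [0.6, 0.8]
```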
There is nothing, however, in the Schrödinger form of quantum mechanics that allows for such a ‘collapse’ of the wave function. It appears neither in the Schrödinger formulation nor in the measurement postulate. It must be added separately. Thus this is an ad hoc assumption that to date lacks any physical explanation despite many attempts Ghirardi ; Everett ; Griffiths to give it one, including introducing decay factors into the wave function, introducing nonlinearities into the Schrödinger equation, positing the existence of quantum splitting and multiple universes, or invoking superselection rules that limit solutions.
The notion of measurement per se is a classical construct, founded upon the behaviour of classical systems in interaction with measurement apparatus. There is no reason a priori that this classical measurement notion should apply to quantum systems. Moreover, unless one is a dogmatic reductionist, there is no reason a priori to believe that the laws of quantum mechanics should apply to classical systems, particularly given the emergent nature of classical systems. These are two realms which, when separated by sufficiently large spatial and temporal scales, manifest distinct features and possess quite distinct mathematical descriptions. Their effective theories are quite distinct. Indeed, the archetypal dynamical perspective described in a later section begins with the understanding that the classical and quantum realms constitute distinct categories of reality and require distinct frames of reference for their interpretation and for formulating interactions within them. The quantum mechanical notion of measurement is an attempt to bridge the quantum and classical realms. As Bohr repeatedly emphasized, the idea of measurement is a classical construct and the result of a measurement is a classical object, to wit, a mark, that can be apprehended by a human observer. The first problem thus arises from an attempt to apply classical concepts in a top-down fashion to the quantum realm. The second problem arises from an application of the reductionist paradigm and the attempt to apply quantum concepts to the classical realm, in particular the attempt to treat the measurement apparatus as a quantum mechanical object. As London and Bauer demonstrated, the assumption that the measurement apparatus has a description as a quantum mechanical entity simply results in a more complicated superposition of states, now including both the system and the measurement apparatus.
The entire coupled system now evolves according to a more complicated wave equation, which, by virtue of its linearity, requires an observer to collapse it. Incorporating this new observer into the situation leads to an infinite regress, a reductionist nightmare.
Shimony showed (Shimony, p. 34) that even with approximate measurements, the quantum state of a system will not undergo a transition to an eigenstate under the usual Schrödinger dynamics.
The apparent necessity for some nonphysical process to intervene in order to collapse the wave function has led to several schools of thought. One school proposes that reality requires the active participation of an observer to make it manifest. This introduces an element of psychology (and sometimes mysticism) into physics. A second prominent school of thought suggests that the wave function is a mathematical convenience which enables the calculation of various correlations and functions but which has no counterpart in reality. Some view the wave function as reflecting our ignorance about a system, but whose ignorance, and why is every observer’s ignorance exactly the same? There is an alternative view, however, that treats the wave function as if it were an actual physical wave, with a physical phase Mann . Indeed a recent paper Barrett has proposed a theorem which appears to demonstrate that the wave function must be real in order to obtain the standard quantum mechanical results. But in that case what does it represent? A distribution of probability? Probability of what? How can a probability be physical? A distribution of mass? Of charge? How does one reconcile that with the observed pointlike nature of fundamental particles? If the wave function is real, then how does one account for the observed quantum nature of quantum interactions? Energy is exchanged in quantum packets, but how is this to occur if there is a physically real wave function distributed over space and time? How is it that only one part of the wave contributes to the transfer of energy when both systems have waves that are equally distributed? Why, if the wave is in a superposition of individual eigenfunctions, does the result of a measurement leave the wave in a single eigenfunction? How does such a transformation occur? How can a particle have multiple energies yet transfer only a specific energy?
A less prominent school of thought is that of realism, which holds that at the level of ultimate reality there are actual entities and that quantum mechanics is not the final theory, but needs to be supplemented by a deeper theory of so-called hidden variables. Debates about such hidden variables go back to Einstein and Bohr and continue today in questions concerning whether or not hidden variable models are even capable of reproducing the results of quantum mechanics.
IV Realism, Hidden Variables and Nonlocality
The heart of the problem lies in the question whether the elements of NRQM describe physical reality or are mathematical conveniences that simply enable calculation. This debate has raged since the time of Einstein and Bohr and was thought to be resolved in the famous hidden variable theorems of Bell Bell and their subsequent experimental verification beginning with the experiments of Aspect Aspect . Realist models of quantum mechanics have been around since De Broglie first proposed the idea of a ‘pilot wave’, later developed by Bohm into a fully realized ontological model Bohm . The basic idea is to rewrite the wave function as a product $\psi = R e^{iS/\hbar}$, where $R$ and $S$ are real functions subject to the differential equations
$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0, \qquad Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R},$$
which can be understood as a Hamilton-Jacobi equation for a particle with momentum $\mathbf{p} = \nabla S$ moving normal to a wave front under an additional quantum potential $Q$, while
$$\frac{\partial P}{\partial t} + \nabla \cdot \left( P \frac{\nabla S}{m} \right) = 0$$
describes the conservation of probability for an ensemble of such particles having a probability density $P = R^2$.
Bohm interprets these equations as representing the motion of a definite particle coupled to a quantum field given by $\psi$ (which satisfies the Schrödinger equation). The equation of motion of the particle is given by
$$m \frac{d^2 \mathbf{x}}{dt^2} = -\nabla (V + Q).$$
The quantum potential does not depend upon the amplitude of the wave in the sense that multiplying the wave function by a constant does not change the value of the quantum potential. It depends only upon the form of the wave. A similar situation pertains in signal theory where the information content of a signal depends upon its form but not upon its intensity. Bohm and Hiley make the important point that the quantum potential acts upon the particle not as a form of energy but rather as a form of information. Such information is not to be understood in the Shannon sense as a reduction of uncertainty but rather in the semantic sense as providing knowledge about the physical environment within which the particle is moving. This usage of information is uncommon in physics and engineering but virtually universal in most other fields of human endeavor.
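The claim that the quantum potential depends on the form but not the intensity of the wave can be verified numerically, since any constant multiplying $R$ cancels in the ratio $\nabla^2 R / R$. A sketch with an assumed Gaussian amplitude, in units with $\hbar = m = 1$ (my choices, for illustration only):

```python
import numpy as np

# Sketch: the quantum potential Q = -(hbar^2/2m) * R''/R for an
# assumed Gaussian amplitude R(x).  Multiplying R by a constant leaves
# Q unchanged, because the constant cancels in the ratio R''/R --
# Q depends only on the form of the wave, not its intensity.
hbar = m = 1.0
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def quantum_potential(R):
    d2R = np.gradient(np.gradient(R, dx), dx)  # numerical second derivative
    return -(hbar ** 2 / (2 * m)) * d2R / R

R = np.exp(-x ** 2 / 2)            # Gaussian amplitude
Q1 = quantum_potential(R)
Q2 = quantum_potential(7.3 * R)    # same form, different intensity

print(np.allclose(Q1, Q2))         # True: Q is unchanged by rescaling
```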
Bohm’s theory is generally described as a hidden variable model although as Bohm himself points out there are actually no hidden variables in the model. Rather the most important aspect of Bohm’s model is that it provides a realist ontological interpretation of quantum mechanics in its assertion that actual particles exist whose motion is guided by the addition of the quantum potential which imparts a stochastic character to their motion as a result of nonlinear effects.
This quantum potential is decidedly nonlocal in its effect, which appears to occur instantaneously. This apparent violation of relativistic constraints has limited acceptance of Bohm’s approach, but, to be fair, nothing is being stated as to the ontological character of this quantum potential. While relativity limits the speed at which any signal may propagate, there is no suggestion that the information provided by the quantum potential propagates as a signal.
The issue of nonlocality is often raised in association with hidden variable models of quantum mechanics. This will be discussed below, but hidden variables are not necessary for nonlocality to arise as an issue in quantum mechanics. It appears even in nonrelativistic quantum mechanics in the case of entanglement. For simplicity consider the case of photon pair production through parametric down conversion. Photon polarization gives rise to two possible states $|H\rangle$ and $|V\rangle$. Pair production results in an entangled state of the form $\frac{1}{\sqrt{2}}(|H\rangle_1 |V\rangle_2 + |V\rangle_1 |H\rangle_2)$ Zeilinger . Measurement of the polarization of one photon automatically guarantees the state of polarization of the second photon, no matter how far the two photons are separated spatially. Experiments have demonstrated that the speed with which this transfer of information would have to occur is well in excess of the speed of light. Shimony has called this a non-controllable nonlocality, meaning that it cannot be used to send a signal between two spacelike separated observers. This stands in distinction to a controllable nonlocality, which would violate the relativistic constraint. Nonlocality appears to be a feature of the quantum realm regardless of whether or not hidden variables might appear at some lower level.
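As a concrete numerical illustration, the joint outcome probabilities for such a pair follow directly from the state vector. The sketch below uses the standard textbook form of the polarization-entangled state; the numbers are generic and not tied to any particular experiment:

```python
import numpy as np

# Sketch: for the entangled pair (|H>|V> + |V>|H>)/sqrt(2), finding
# photon 1 in state H forces photon 2 into state V -- the joint
# probability of (H, H) vanishes while that of (H, V) is one half.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

psi = (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2)  # entangled state

P_HH = abs(np.dot(np.kron(H, H), psi)) ** 2  # both photons H: impossible
P_HV = abs(np.dot(np.kron(H, V), psi)) ** 2  # photon 1 H, photon 2 V

print(P_HH, P_HV)   # 0.0 and (approximately) 0.5
```

Conditional on photon 1 being found in $|H\rangle$, photon 2 is in $|V\rangle$ with certainty, which is the correlation described above.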
Bell Bell considered the possibility of there being a set of hidden variables from which quantum mechanical correlations might be derived. In his formulation he did not require that these hidden variables be deterministic. In fact he explicitly stated “It is a matter of indifference in the following whether $\lambda$ denotes a single variable or a set, or even a set of functions, and whether the variables are discrete or continuous.” (Bell, p. 15). The issue of whether or not these hidden variables are deterministic is not germane to his argument. In fact in proving his result it is necessary to assume that these variables are distributed with some probability distribution $\rho(\lambda)$. Whether this distribution arises due to ignorance on the part of the observer or some inherent stochastic dynamics is irrelevant.
There is a subtlety here which has been discussed in detail by Bunge Bunge . There is a notion of determination, which refers to how physical entities acquire their properties, and the mechanisms of such determination, which may be deterministic, nondeterministic or stochastic. The realist view is an assertion that the properties of physical entities are determined in advance, regardless of the mechanism. Hidden variables provide a vehicle to explain how such determination arises. The nonrealist view is an assertion that these properties are not determined in advance but arise solely out of the interaction with some observer. In physics, the notion of determination is often conflated with the notion of a deterministic mechanism but that is an error according to Bunge. The requirement for deterministic mechanisms is more ideological than ontological since the discovery of deterministic chaos has blurred the boundary between deterministic and stochastic mechanisms. The real issue is whether or not at the lowest levels of reality there exist actual entities which can be ascribed values given by these hidden variables and which subsequently determine physical entities.
In his argument Bell considered a pair of spin entangled particles $S_1$ and $S_2$. Assume that spin measurements are made in the directions $\mathbf{a}$ and $\mathbf{b}$, and that along these directions the spins take the values $\pm 1$. Let $E(\mathbf{a}, \mathbf{b})$ be the expectation value of the product of the measured values for $S_1$ in the direction $\mathbf{a}$ and for $S_2$ in the direction $\mathbf{b}$. Let $\mathbf{c}$ be a third direction. Bell assumes first of all that, given some probability measure $\rho(\lambda)$ on the set of hidden variables, the expectation value of the product is calculated as
$$E(\mathbf{a}, \mathbf{b}) = \int A(\mathbf{a}, \lambda) B(\mathbf{b}, \lambda)\, \rho(\lambda)\, d\lambda$$
where $A(\mathbf{a}, \lambda)$ gives the value measured in system $S_1$ at angle $\mathbf{a}$ and $B(\mathbf{b}, \lambda)$ the value measured in system $S_2$ at angle $\mathbf{b}$.
Bell showed that, if an additional assumption of locality is made (meaning that the expectation values of one system must be independent of both the angle measured and the measurement result received in the other system), then the following inequality holds:
$$|E(\mathbf{a}, \mathbf{b}) - E(\mathbf{a}, \mathbf{c})| \le 1 + E(\mathbf{b}, \mathbf{c}).$$
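The inequality can be illustrated numerically by comparing a simple deterministic local hidden variable model against the quantum singlet correlation $E(\mathbf{a}, \mathbf{b}) = -\cos(a - b)$. The sign-function response model below is a standard textbook construction, assumed here purely for illustration:

```python
import numpy as np

# Sketch: check Bell's inequality |E(a,b) - E(a,c)| <= 1 + E(b,c) for
# (i) a local hidden variable model with responses
#     A(a, l) = sign(cos(l - a)),  B(b, l) = -sign(cos(l - b)),
#     where the hidden variable l is uniform on [0, 2*pi), and
# (ii) the quantum singlet correlation E(a, b) = -cos(a - b).
lam = np.linspace(0.0, 2.0 * np.pi, 1_000_000, endpoint=False)

def E_lhv(a, b):
    # average of A(a, l) * B(b, l) over the uniformly distributed l
    return np.mean(np.sign(np.cos(lam - a)) * -np.sign(np.cos(lam - b)))

def E_qm(a, b):
    return -np.cos(a - b)

a, b, c = 0.0, np.pi / 3, 2 * np.pi / 3

for E in (E_lhv, E_qm):
    lhs = abs(E(a, b) - E(a, c))
    rhs = 1 + E(b, c)
    print(E.__name__, round(lhs, 3), "vs", round(rhs, 3))
# The hidden variable model obeys the inequality (this particular model
# saturates it), while the quantum correlation violates it (1.0 > 0.5).
```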
Nonlocality plays a central role in providing constraints upon the nature of hidden variables that might be involved in determining quantum phenomena. Since Bell’s original work, two specific forms of nonlocality have been identified, parameter independence and outcome independence. Parameter independence means that the behaviour of one system does not depend upon the particular choice of property to be measured on another system. Outcome independence means that the behaviour of one system does not depend upon the particular outcome of a measurement of another system. As Shimony points out (Shimony, p. 90), a violation of parameter independence would permit a form of controllable nonlocality which could lead to a violation of relativistic constraints. A violation of outcome independence can at most lead merely to a form of non-controllable nonlocality, and indeed the phenomenon of entanglement provides a prima facie case of violation of outcome independence.
The existence of entanglement demonstrates that any model of quantum phenomena will of necessity need to manifest some form of nonlocality. Bell’s theorem provides additional evidence for nonlocality by demonstrating that any model of quantum mechanics involving hidden variables that possesses both forms of locality will of necessity lead to predictions that violate the inequalities. These inequalities generally involve systems that are separated spatially. Bell’s work involved spacelike separated systems and was extended by Leggett and Garg Leggett to include systems that are separated temporally. While the phenomenon of entanglement demonstrates that nonlocality must be a feature at the observable level, the Bell and Leggett-Garg results show that nonlocality must be a feature of any model at any level.
There is, however, a subtle assumption in the formulation of Bell’s Theorem which was not questioned until recent times. This assumption is that the probability theory associated with any classical process must be Kolmogorovian in nature, so that the formula for the calculation of the individual correlation functions must be Kolmogorovian in form. It is well known that the corresponding formula in the quantum mechanical case is non-Kolmogorovian, as it contains an interference term. The assumption that a classical system must necessarily follow a Kolmogorovian probability structure does not seem to be questioned in the literature on Bell’s theorem. Accepting that assumption then forces any hidden variable model to possess an inherent nonlocality. But if this assumption is false, as has been conjectured by Palmer Palmer and Khrennikov Khrennikov as discussed below, then the conclusion that a hidden variable model must be nonlocal no longer holds. The following section discusses the idea of non-Kolmogorovian probability and demonstrates that it is not exclusive to quantum mechanics but can apply to classical systems as well, especially those generated by iterated function systems and games. This suggests that expanding the models used to express the underlying dynamics to include iterated function systems or game-based dynamics might open the door to realist local or quasilocal models.
V Non-Kolmogorovian Probability
The concept of probability is fundamental to the interpretation of quantum mechanics. It is also the source of much of the conceptual confusion and the paradoxes that confound quantum mechanics. Hidden variable theorems based upon Bell-type inequalities involving relations between various correlation functions depend intimately upon the structure of the probability theory within which these functions are defined and constructed. Indeed that is the very point used by Palmer Palmer in his demonstration of a hidden variable model of quantum mechanics using iterated function systems. In that model he showed that it was impossible to construct the required three-state correlation functions on account of chaotic effects. As a result the Bell inequalities could be defeated.
Most people are used to probability theory based upon the axioms of Kolmogorov. The situation in probability today is similar to that in geometry in the 19th century. For two thousand years, Euclidean geometry had held sway as the one true model of geometry. This conception eventually gave way to the realization that there were actually many different types of geometry, just as there turned out to be many different forms of logic and of set theory. It turns out that there are many different forms of probability theory as well. Probabilities associated to classical events are generally held to be modeled exclusively by Kolmogorov type probability theory. That assumption forms a necessary part of the creation of the various Bell inequalities. That this assumption turns out not to be true forms the central thesis of this section.
Probability theory began in the 17th century in the correspondence between Pascal and Fermat on games of chance. There the goal was to aid the gambler in making decisions that would enable them to maximize their profit from playing these games. Probability was linked directly to the idea of frequency. The probability of an outcome was the limiting value of the fraction of times that the event occurred during the play of the game as the number of plays was extended to infinity. Any individual play of a game resulted in fractions that varied from this number but with repeated play the average of these fractions would tend to the limiting value. This is a consequence of the celebrated law of large numbers; moreover, the central limit theorem shows that in repeated measurements of this value the distribution of the sample averages approaches a normal distribution. The beautiful mathematical properties of the normal distribution led to its widespread misapplication for more than a century, even though natural phenomena manifest events that follow a diversity of probability distributions. This has had serious consequences in fields such as psychology, medicine and economics West
Probability theory was placed on firm mathematical ground in the early 20th century by Kolmogorov, who formulated a set of formal axioms based on set theory and analysis, building upon the earlier work of Carathéodory on measure theory. Crucial to Kolmogorov’s theory are three axioms.

Let Ω be a set and F a σ-algebra of subsets of Ω. A probability measure is a map P from F to [0,1] such that P(Ω) = 1 and P(A ∪ B) = P(A) + P(B) for disjoint sets A, B ∈ F.

The conditional probability of A given B is defined as P(A|B) = P(A ∩ B)/P(B), provided P(B) > 0.

Events A and B are said to be independent if P(A ∩ B) = P(A)P(B).
The definition of conditional probability above yields the following formula of total probability: for any partition B_1, …, B_n of Ω, and any A ∈ F, P(A) = Σ_i P(A|B_i)P(B_i). Quantum mechanics changed this profoundly. Feynman Feynman asserts that
The concept of probability is not altered in quantum mechanics. When we say that the probability of a certain outcome of an experiment is p, we mean the conventional thing, i.e., that if the experiment is repeated many times, one expects that the fraction of those which give the outcome in question is roughly p. What is changed, and changed radically, is the method of calculating probabilities.
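The classical rules just stated are easy to verify mechanically. The following sketch uses a toy fair-die sample space of my own choosing, not anything from the text, and checks the law of total probability with exact rational arithmetic:

```python
# Toy check of the Kolmogorov rules quoted above on a finite sample space.
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                 # a fair die
P = {w: Fraction(1, 6) for w in omega}

def prob(event):
    return sum(P[w] for w in event)

def cond(A, B):
    return prob(A & B) / prob(B)           # P(A|B) = P(A ∩ B) / P(B)

A = {2, 4, 6}                              # the event "even"
partition = [{1, 2}, {3, 4}, {5, 6}]       # a partition of omega

total = sum(cond(A, B) * prob(B) for B in partition)
print(total)                               # 1/2, agreeing with prob(A)
```

Here additivity and the total probability formula hold exactly, which is the baseline against which the examples below fail.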
The wave function ψ of nonrelativistic quantum mechanics is most often viewed as giving rise to a probability distribution of the form p(x) = |ψ(x)|². This simple interpretation, attributed to Born, actually belies a deep subtlety. Consider the case in which one has a system upon which one may perform two different measurements resulting in the dichotomous outcomes A_1 and A_2, with probabilities P_1 and P_2. Kolmogorov theory shows that the sum of probabilities takes the form P = P_1 + P_2
However, if one attempts the same calculation in a quantum mechanical setting using the Born rule then one obtains the formula P = P_1 + P_2 + 2√(P_1 P_2) cos θ
instead. From this result alone it is clear that quantum probability theory is of a nonKolmogorovian type. Indeed, as noted by Khrennikov, inequalities of Bell type were developed much earlier in probability theory, dating back to the time of Boole, and arise in situations in which one attempts to determine correlations when it is impossible to define those correlations using a single Kolmogorov probability space. Khrennikov Khrennikov comments that even Kolmogorov in his original writings on probability was more sophisticated than later writers. He writes “For him (Kolmogorov) it was totally clear that it is very naive to expect that all experimental contexts can be described by a single (perhaps huge) probability space”(Khrennikov, , pg. 26). It may not be possible to measure all of these observables simultaneously. The issue is then whether or not a set of observables exhibits probabilistic compatibility or incompatibility, that is, whether it is possible to construct a single probability space serving for the entire family. Such compatibility certainly fails in quantum mechanics, due to the noncommutative nature of the set of self adjoint operators representing quantum measurements and the presence of interference terms arising from probabilities based on Born’s rule. It also fails for classical systems in general, something that has been mostly ignored (Simpson’s paradox in the social sciences is an example of this).
Nonlocality is not required in order to obtain those results. They are wholly dependent upon the nature of the observables being measured and whether or not a single Kolmogorov probability space can be constructed. These observations raise questions as to whether it is absolutely impossible to have local hidden variable models at the lowest levels.
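For concreteness, the contrast between the two addition rules can be tabulated numerically. The probabilities P1, P2 and the phases θ below are arbitrary illustrative choices of mine, not values from the text:

```python
# The quantum rule for combining two alternatives carries an interference
# term 2*sqrt(P1*P2)*cos(theta) that the Kolmogorov sum P1 + P2 lacks.
import math

P1, P2 = 0.3, 0.2
kolmogorov = P1 + P2                       # classical sum: 0.5

def quantum(theta):
    return P1 + P2 + 2 * math.sqrt(P1 * P2) * math.cos(theta)

for theta in (0.0, math.pi / 2, math.pi):
    print(round(quantum(theta), 4))
# Only at theta = pi/2 does the interference term vanish and the
# Kolmogorov sum reappear; other phases over- or undershoot it.
```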
v.1 Failure of Additivity
The rule of additivity is fundamental in Kolmogorov’s formulation of the laws of probability, providing one of its axioms. It holds, mostly, in nonrelativistic quantum mechanics. Indeed suppose that a single quantum system can exist in one of a set of distinct energy eigenstates φ_1, φ_2, …, φ_N. These energy eigenstates form an orthonormal set of functions in some Hilbert space. Suppose that the system is now created in a linear superposition of these energy eigenstates as is permitted by the Schrodinger equation. The wave function for this superposition will take the form ψ = Σ_n c_n φ_n where the weights are chosen so that Σ_n |c_n|² = 1, which ensures that |ψ|² can be interpreted in its own right as a probability distribution. This guarantees that ∫ |ψ(x)|² dx = 1.
When the energy of such a quantum system is measured it will yield a single value corresponding to one of these energy eigenstates. If the system is subjected to repeated measurements of its energy it will remain in the same energy eigenstate. This is considered due to the collapse of the wave function that occurs as a result of the measurement process. If multiple identically created copies of the system have their energies measured then these energies will be distributed according to the probability distribution given by .
The expectation value of the energy is given by ⟨E⟩ = Σ_n |c_n|² E_n. Suppose though that one asks a slightly different question, namely, fix some region of space, say Λ, and ask what is the expectation value of the energy over the region Λ. This is calculated as ⟨E⟩_Λ = ∫_Λ ψ*(x) Ĥ ψ(x) dx. Rewriting yields ⟨E⟩_Λ = Σ_n E_n [Σ_m c_m* c_n ∫_Λ φ_m*(x) φ_n(x) dx]. One sees that the new probability associated to each energy E_n is no longer |c_n|² but rather the more complicated p_n(Λ) = Σ_m c_m* c_n ∫_Λ φ_m*(x) φ_n(x) dx. This is due to the fact that the wave functions may overlap on Λ. It is only in the context of the entire space that the wave functions are orthogonal. There is no guarantee that Σ_n p_n(Λ) = 1, so these probabilities are not additive even though the events, namely the E_n, cover the range of possible energy values.
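This non-additivity is easy to exhibit numerically. The sketch below uses particle-in-a-box eigenfunctions on [0,1] and the region Λ = [0, 1/2]; the particular system, the equal-weight superposition and the choice of region are my own illustrative assumptions, not specified in the text:

```python
# Region-restricted energy probabilities p_n(Lambda) for a two-state
# superposition of particle-in-a-box eigenfunctions (assumed example).
import numpy as np

xl = np.linspace(0.0, 0.5, 10001)          # the region Lambda = [0, 1/2]
dx = xl[1] - xl[0]

def phi(n):                                # orthonormal box eigenfunctions on [0,1]
    return np.sqrt(2.0) * np.sin(n * np.pi * xl)

def overlap(m, n):                         # \int_Lambda phi_m phi_n dx (trapezoid rule)
    f = phi(m) * phi(n)
    return float(np.sum((f[:-1] + f[1:]) * 0.5 * dx))

c = [np.sqrt(0.5), np.sqrt(0.5)]           # equal-weight superposition, |c1|^2+|c2|^2 = 1
p = [sum(c[m - 1] * c[n - 1] * overlap(m, n) for m in (1, 2)) for n in (1, 2)]

print([round(v, 3) for v in p], round(sum(p), 3))
```

Each p_n comes out near 0.462 rather than |c_n|² = 0.5, and the sum is about 0.924 rather than 1: the overlap integral ∫_Λ φ_1 φ_2 dx does not vanish on the half interval.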
Also note that in constructing a superposition state one is in essence constructing a sum of probabilities, for if E_n is the eigenvalue associated with eigenstate φ_n then the probability distribution based upon the wave function of a superposition becomes |ψ(x)|² = Σ_n |c_n|² |φ_n(x)|² + Σ_{m≠n} c_m* c_n φ_m*(x) φ_n(x).
This problem is frequently considered to be a feature of quantum mechanics because the quantum mechanical formalism allows for the phenomenon of quantum interference. That it appears in the classical realm as well is illustrated by the following simple model. Almost everyone is familiar with the Danish children’s toy, LEGO. Typical LEGO pieces are blocks of plastic having tiny solid cylinders protruding on the top surface of the block and corresponding cylindrical tubes on the undersurface. There are plates that can be used for mounting LEGO block structures. Consider the following scenario. There is a mounting plate fixed inside a sealed box. Within the box is a bag containing two blocks of different sizes, say a 1x1 block and a 2x1 block. There is a dial on the outside of the box which reads 0,1,2. When the dial is set, a reading is taken of the plate and a light turns on corresponding to whether there is no block on the plate (0), a 1x1 block, whether alone or combined with the 2x1 block (1), or a 2x1 block, again alone or in combination with the 1x1 block (2). The examiner cannot look in the box and in fact has no knowledge of the contents of the box. They can only switch the dial and note whether or not a light appears. In another room a researcher can remotely arrange whatever they like on the plate: no block, a 1x1 block, or a 2x1 block, and they change the arrangement immediately following each observation of the examiner. Clearly the probabilities of no block, a 1x1 block or a 2x1 block are all 1/3. Therefore for the examiner the probabilities of obtaining a light for 0,1,2 are all 1/3.
Now let us change the game slightly. The researcher is now permitted to take no action, place a 1x1 or a 2x1 block on the plate, or to couple the 1x1 block to the top of the 2x1 block and affix this combination to the plate. Setting the dial to 1 or 2 results in a light so long as the corresponding block is present, regardless of whether it is alone or in combination. Note that it is impossible in this arrangement to measure for 1 and 2 simultaneously. Now what is the probability of there being a light on 1? This probability is 1/2, because there is a probability 1/4 of there being a single 1x1 block and a probability 1/4 of there being a combination. The same holds for the probability of a light on 2, while the probability of a light on 0 is now 1/4. Note that now P(0) + P(1) + P(2) = 1/4 + 1/2 + 1/2 = 5/4. As far as the examiner is concerned, the outcomes are disjoint but the sum is not additive to 1.
A standard argument to correct this problem is to assert that the space of alternatives has been incorrectly constructed. If the examiner is allowed to look at the blocks then they might argue that only the four global configurations constitute allowable events and these decompose into four equal probabilities of 1/4 each, and then the probabilities of occurrence of the individual smaller blocks can be determined using conditional probabilities as per the Kolmogorov scheme. In such a case the probability of a 1x1 block becomes: P(1) = (1/4)(0) + (1/4)(1) + (1/4)(1) + (1/4)(0) = 1/2, which is the result given above. But this is a mathematical cheat because it assumes knowledge that the examiner does not and cannot possess. From the point of view of the examiner the space of alternatives was correctly constructed and the outcomes are disjoint. However they must also accept the necessity to introduce an interaction term, or to accept a nonstandard form for the calculation of the total probability, namely P(0) + P(1) + P(2) − P(1 and 2) = 1.
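The examiner’s probabilities can be obtained by brute-force enumeration of the four configurations, which makes both the excess over 1 and the size of the correction term explicit:

```python
# Enumerate the second LEGO game directly. The four researcher
# configurations are equiprobable; the dial tests only for the
# presence of each block type.
from fractions import Fraction

configs = ["empty", "b1", "b2", "b1+b2"]   # b1 = 1x1 block, b2 = 2x1 block
w = Fraction(1, len(configs))              # each configuration has weight 1/4

P0 = sum(w for c in configs if c == "empty")                  # light on 0
P1 = sum(w for c in configs if "b1" in c)                     # light on 1
P2 = sum(w for c in configs if "b2" in c)                     # light on 2
interaction = sum(w for c in configs if "b1" in c and "b2" in c)

print(P0, P1, P2, P0 + P1 + P2)            # 1/4 1/2 1/2 5/4
print(P0 + P1 + P2 - interaction)          # 1  (the corrected total)
```

Subtracting the interaction term P(1 and 2) = 1/4, which the separate dial readings count twice, restores a total of 1.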
Arguing that this scenario is contrived is also a cheat because this is precisely the situation for the experimental physicist. Measurement devices provide only the results of measurements; they do not yield the states of the systems being measured, which cannot be directly observed. The idea of a particle being in a superposition comes out of theory, not direct observation. In many cases experiments are contrived to create a collection of particles in a predetermined state so that the examiner has some knowledge beforehand. If no such knowledge is obtainable, or if simultaneous measurements cannot be made, it may not be possible to confirm the existence of such interaction states so as to expand the space of alternatives in such a manner as to preserve the Kolmogorov property. The preservation of Kolmogorovian probability appears to require that one begin with the most basic ‘natural kinds’ from which all other functions are derived, but if we do not know that combinations exist we can only deal with the event set in hand.
Interference creates a failure of the usual additivity in the quantum mechanical case and in this classical case as well. Thus one must accept that the Kolmogorov axioms may work well in many circumstances but there may be other situations in which they fail, and instead of denying the validity of these alternative situations, we should embrace the idea that, just as in the acceptance of nonEuclidean geometry, we should accept the idea of nonKolmogorovian probability theories. There is nothing a priori wrong with the question that the examiner asks, nor the interpretation made of the conditions under which the question is to be answered, unless one requires that the answer follow the conditions of Kolmogorovian probability theory. Instead this very simple example urges us to accept the existence of nonKolmogorovian probability theories, even in the classical setting, and in situations in which the basic elements of observations are derived from prior conditions that are able to interact or superpose in some manner. Such possibilities are abundant in quantum mechanics but also in the life and social sciences. Khrennikov has emphasized this point in his extensive writings on nonKolmogorovian probability theory Khrennikov . The problem arises in this example because the Lego pieces are able to interact. The problem arises in quantum mechanics because in a superposition state the individual eigenstates interact. There is no fundamental difference between these two cases. Kolmogorov theory presumes that there is no interaction between individual events and that distinct events correspond to distinct natural kinds. This example might seem trivial yet it lies at the heart of the problem of measurement. When we consider an electron, for example, in a superposition of distinct energy states, what exactly do we mean? 
When we ask the question of its energy, we are asking exactly the question of the examiner above: when we observe the electron, do we observe one of its supposed constituent energy states? We do not think of the superposed electron as a different natural kind from the nonsuperposed electrons. Rather we think of an electron in a particular state, and that very same electron can change state into one of the energy eigenstates or back into a different superposed energy state. The electron is the natural kind, not the state. Moreover an electron, to the best knowledge available today, does not appear to be composed of smaller natural kinds; it is a single whole.
The point to be made is that in any model providing a realist interpretation of quantum mechanics it is necessary to pay close attention to the subtle nature of interactions among the various elements that make up the model. One must be very careful not to project fundamental features of Kolmogorovian probability theory onto nonKolmogorovian probability theories. These are subtle conceptual and logical errors which I suspect have arisen time and time again in our attempts to understand quantum mechanics. Over the past century we have become comfortable with nonEuclidean geometry and such logical errors no longer plague the field. Hopefully the same may one day be true of quantum mechanics. The most important consideration in constructing an alternative model of quantum mechanics is to ensure that the nonKolmogorovian nature of the probabilities be preserved in the model.
There are other issues at play besides the type of constitutional interference as noted above. The inability to construct a single space upon which all of the probability functions can be constructed is another feature that is frequently ignored, even in the classical application of Kolmogorovian probability theory. This problem arises when one has a collection of distinct suitable state spaces upon which Kolmogorovian probabilities are developed and then one attempts to combine these into a single space in order to calculate correlations and conditional probabilities and still expect the original individual probabilities to be derivable. Probability theorists have known for a century that such a construction is not always possible and yet time and again researchers proceed as if they can carry out such a construction.
In the model to be constructed below we shall utilize combinatorial games with tokens which frequently give rise to nonKolmogorovian probability structures.
v.2 Failure of Stationarity
Let us consider another classical example. Consider the following iterated function system, denoted F_1. Consider a simple 2x2 block into which we place four different numbers. For example one might configure the block a with first row (1, 2) and second row (3, 4). There are two transformations, r_1 and r_2, which can be applied to such a block. Transformation r_1 interchanges the elements in the first row. Thus r_1(a) is the block with first row (2, 1) and second row (3, 4).
Applying r_1 and r_2 to this block induces the following transformations, where a move to the right represents an application of r_1 and a move down represents an application of r_2.
If we start with a and repeatedly apply r_1 and r_2 we will end up with a collection of possible sequences of blocks which can be represented in the form of a tree, in which an arrow down and to the left means apply r_1 and down and to the right means apply r_2.
Each layer represents a possible outcome after a fixed number of iterations. In order to determine the probability of observing a particular outcome one must sum up the number of paths leading to said outcome and then divide by the total number of possible paths. Summing over the paths leading to each outcome leads to a tree diagram.
Dividing by the total number of paths at each level gives
We now find the probability for a given outcome by summing over the probabilities for all paths leading to the outcome. This yields the following probability distributions:
Level , Level , Level , Level .
Thus as we successively iterate the system, the probability distributions at successive times oscillate .
It is important to note that no probability distribution has been assigned a priori to the choices of the transformations r_1 and r_2. If a probability is preassigned then the above probabilities need to be modified by multiplying each path segment by the probability assigned to the particular path choice, either r_1 or r_2.
The above model provides a simple, nondeterministic dynamical system which is entirely classical, where the probabilities are determined by a discrete version of real valued path integrals, and which yields temporally oscillating, spatially nonstationary probability distributions. The point of this example is to highlight the fact that quantum mechanical systems are not alone in having a probability structure that can be calculated utilizing path integrals. Combinatorial based classical systems such as the example described above and many combinatorial games possess this path integral structure. Whether or not there exists a limiting stationary probability distribution over the state space depends upon the tree structure induced by the dynamics of the combinatorial operations. Most iterated function systems involving actions on a continuous real space require some form of contraction so as to ensure that an invariant or stationary measure exists on the state space.
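A short simulation can reproduce this path-counting procedure. The text defines r_1 as the swap of the first row; taking r_2 to swap the second row is an assumption of mine, made only so the example is concrete and runnable:

```python
# Path-counting for a block IFS of the kind described above.
# r1 swaps the first row (as in the text); r2 swapping the second
# row is an assumed second map, chosen for illustration.
from collections import Counter
from fractions import Fraction

def r1(blk):
    (p, q), (r, s) = blk
    return ((q, p), (r, s))     # interchange the first row

def r2(blk):
    (p, q), (r, s) = blk
    return ((p, q), (s, r))     # interchange the second row (assumed)

a = ((1, 2), (3, 4))
paths = Counter({a: 1})         # number of tree paths ending at each block
levels = []

for level in range(4):
    total = sum(paths.values())
    levels.append({b: Fraction(n, total) for b, n in paths.items()})
    nxt = Counter()
    for blk, n in paths.items():            # branch left (r1) and right (r2)
        nxt[r1(blk)] += n
        nxt[r2(blk)] += n
    paths = nxt

for lv, dist in enumerate(levels):
    print(lv, dist)
```

Under this assumed r_2 the support alternates between {a, r_2(r_1(a))} at even levels and {r_1(a), r_2(a)} at odd levels: the distributions oscillate rather than settling to a stationary measure, as claimed above.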
v.3 Failure of the Law of Total Probability
Let us stay with the block space. Consider a second iterated function system acting on the same space of blocks. Call it F_2. This time we have a single function f acting on the block space. The action of f on any block is to interchange the first and second columns. That is, f(a) is the block with first row (2, 1) and second row (4, 3).
The action of f on the block space is simple: f is an involution, so applying it twice returns any block to its original configuration.
Again starting with block a and repeating the procedure of the previous section yields the following probability distributions:
Level , Level , Level , Level .
Note that the distribution for F_2 is distinct from that of F_1 in the previous example and that these represent two distinct iterated function systems. Now let us consider the iterated function system on the block space generated by r_1, r_2 and f together. Again start with block a. Applying any one of r_1, r_2 or f yields three outcomes. Applying the maps to these outcomes yields nine outcomes. Outcomes are repeated in the listing as each represents a distinct path down the tree. Applying the maps once more yields 27 outcomes. The probability distributions are thus
Level , Level , Level , Level .
We may denote this combined iterated function system as F_3. If we combine the probability distributions of F_1 and F_2 then we would obtain
Level , Level , Level , Level .
Note that the distributions for F_3 do not coincide with the combined distributions of F_1 and F_2. Thus although these probabilities should add according to the usual notions of probability theory, they do not, because there is an interaction effect. In this case, although F_1 and F_2 are distinct iterated function systems and their superposition gives rise to a perfectly good iterated function system, the resulting probability distribution functions cannot be obtained from a simple weighted sum of the individual prior probability distributions because there is an interaction between the constituent maps.
This is a simple example but it bears a formal similarity to the situation in quantum mechanics where one considers linear superpositions of eigenfunctions. In both cases, difficulties arise with the usual composition of probability distribution functions because of interaction effects: function overlap in the case of quantum mechanical systems, algebraic effects in the simple iterated function system discussed here. The significance of this example is that it demonstrates the failure of additivity even in the case of a classical system with real valued functions. Quantum mechanics is not necessary for such nonKolmogorovian effects to appear.
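The same path-counting idea extends to the combined system. Here f swaps the two columns as described above, while r_2 swapping the second row is again my own assumed choice for the second map of the first system:

```python
# Compare the level-2 distributions of the two block IFSs and of their
# combination. r1 and f follow the text; r2 is an assumed second map.
from collections import Counter
from fractions import Fraction

def r1(blk):
    (p, q), (r, s) = blk
    return ((q, p), (r, s))     # swap the first row

def r2(blk):
    (p, q), (r, s) = blk
    return ((p, q), (s, r))     # swap the second row (assumed)

def f(blk):
    (p, q), (r, s) = blk
    return ((q, p), (s, r))     # swap the two columns

def distribution(maps, start, steps):
    """Path-counting distribution after `steps` branchings."""
    paths = Counter({start: 1})
    for _ in range(steps):
        nxt = Counter()
        for blk, n in paths.items():
            for m in maps:
                nxt[m(blk)] += n
        paths = nxt
    total = sum(paths.values())
    return {b: Fraction(n, total) for b, n in paths.items()}

a = ((1, 2), (3, 4))
d1 = distribution([r1, r2], a, 2)       # the first system alone
d2 = distribution([f], a, 2)            # the second system alone
d3 = distribution([r1, r2, f], a, 2)    # the combined system

print(d1); print(d2); print(d3)
```

At level 2 the combined system assigns positive probability to blocks that neither individual system ever reaches at that level, so under these assumptions no weighted sum of the two prior distributions reproduces the combined one, mirroring the failure of total probability described above.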
v.4 Failure of Bell’s Theorem
Let us now consider the following pair of single player combinatorial games. They are not very interesting as games but they illustrate a feature of games, which is that they can defeat the Bell inequality under certain conditions. For this example consider a pair of games played out on the previously defined blocks, one game using the transformation r_1 and the other the transformation f. We consider sequential game play, and we are interested in the outcome following every two steps of play. As described previously, Bell’s original theorem involves relationships among three correlation functions based upon spin measurements on a pair of entangled particles. In this example we consider correlation functions based on trajectories defined by repeated game play, with different initial conditions replacing the different orientations of measurement.
We consider three initial conditions, α, β and γ. In the correlations defined below, the first variable refers to the game generated by r_1 and the second to that generated by f. Measurements of α, β, γ have defined values of +1, −1, +1 respectively.
Play using r_1 or f yields distinct trajectories. Nevertheless when we restrict ourselves to two play games we note that r_1² = id and f² = id, so that we always obtain constant trajectories, namely just the initial condition.
Bell’s inequality takes the form |C(α,β) − C(α,γ)| ≤ 1 + C(β,γ).
Note that only the expectation values are important, not the circumstances under which they were generated. It is only important that the measurement values in the three directions be ±1. So long as we ensure that under two conditions the measurement values also be ±1 then we meet the essential mathematical requirements of Bell’s Theorem. Clearly in this simple example if we choose initial conditions α and β then we shall obtain measured expectation values of +1 and −1 respectively. Choose for the third initial condition γ a block with measurement value +1. Bell’s inequality takes the form |C(α,β) − C(α,γ)| ≤ 1 + C(β,γ)
Since the play corresponds to simply applying the identity to the initial condition, the probability of observing each initial condition is 1, as is the probability of observing the pair of initial conditions, so that the expectation value of the measurement of the product becomes simply the product of the measurements. Therefore calculation of these correlations yields |(+1)(−1) − (+1)(+1)| ≤ 1 + (−1)(+1)
or
2 ≤ 0, which is clearly false. Thus we have a simple discrete, classical system which nevertheless exhibits correlations that violate the Bell inequality. The violation of the inequality holds for this special triple of initial conditions, just as the violation of Bell’s theorem in quantum mechanics occurs for certain measurement directions.
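The arithmetic of the violation can be checked directly. The specific ±1 assignment below is an illustrative assumption, but any dichotomous assignment that violates the inequality necessarily produces the same 2 ≤ 0:

```python
# Bell-type arithmetic for the constant two-play trajectories described
# above, with assumed measurement values +/-1 for the three initial
# conditions alpha, beta, gamma.
def C(x, y):
    # Play is the identity on these initial conditions, so the correlation
    # is just the product of the two deterministic measurement values.
    return x * y

m_alpha, m_beta, m_gamma = +1, -1, +1      # assumed assignment

lhs = abs(C(m_alpha, m_beta) - C(m_alpha, m_gamma))   # |C(a,b) - C(a,c)|
rhs = 1 + C(m_beta, m_gamma)                          # 1 + C(b,c)
print(lhs, rhs, lhs <= rhs)                # 2 0 False: the inequality fails
```

Since lhs = |x(y − z)| with x, y, z ∈ {±1}, a violation forces y = −z, giving lhs = 2 and rhs = 0 regardless of which violating assignment is chosen.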
This model is intentionally simplistic. The point is to demonstrate that the assumption that a classical dynamical system must be describable by a Kolmogorov type probability theory is not actually correct. This example, while involving simple one player games, may also be understood as a deterministic dynamical system. As such it is local, and there is no interaction between the two systems. The coupling arises because of the choice of initial conditions and the processes themselves. The coupling is not at the level of the individual events but rather at the level of the dynamics generating those events. It also demonstrates that the conclusion from Bell’s theorem, that only a deterministic nonlocal hidden variable theory is capable of describing quantum mechanical phenomena, is not necessarily true. This observation is in keeping with Palmer, who showed that an iterated function system may reproduce quantum mechanical spin statistics while still avoiding Bell’s theorem. Palmer gets around Bell by showing that the necessary correlation functions fail to exist. This simple example shows that the theorem may be defeated directly. In situations in which the dynamics is generated by games (and possibly iterated function systems as well), the probability structure need not be Kolmogorovian and consequently it may be possible to defeat the Bell inequality.
Khrennikov summed up these insights, stating “Violation of Bell’s inequality is merely an exhibition of nonKolmogorovness of quantum probability, i.e. the impossibility of representing all quantum correlations as correlations with respect to a single Kolmogorov probability space”(Khrennikov, , pg. 6). Khrennikov has developed these ideas of nonKolmogorovian probability in his Växjö model of contextual probability theory. The details of this approach are not necessary here, but it generalizes the addition formula for quantum probabilities and applies this to classical events. What is important is that the most fundamental assumption of Bell, that classical events must follow the rules of Kolmogorovian probability theory, is not true in general, and so the conclusions derived from Bell’s theorem related to the necessity of nonlocality in any hidden variable model of quantum mechanics are also not universally valid. Inspired by these ideas we turn now to a set of mathematical approaches which capture this idea of nonKolmogorovness in classical settings and so open the door to realist quasilocal hidden variable models of quantum mechanics.
Vi Process Theory
Process is a construct well recognized in psychology and biology, for example the emergence theories of Trofimova Trofimova and Varela Varela . In physics the notion of process generally refers to an interaction between entities that unfolds in time. Processes may change certain dynamical parameters associated with continuous symmetries such as energy, position or momentum. Examples of these include scattering, state transitions, capture and emission. Processes may change certain discrete parameters associated with intrinsic characteristics like charge, charm, strangeness, lepton or baryon number. Creation, annihilation and decay are examples of these. Conceptually, fundamental physical entities enter into or emerge from various processes but their very existence is not viewed as arising out of process.
The view of Whitehead stands in marked contrast. Whitehead views reality as emerging out of a lower level of reality consisting of actual occasions. Fundamental physical entities are viewed as emergent configurations of actual occasions. An analog lies in attempts in the 1980’s to model reality as a cellular automaton where particles appeared as patterns manifesting over time on the cellular automaton lattice wolfram . Process theories, particularly the theory of Whitehead Whitehead , possess several essential features that need to be considered in creating a representational system that expresses them.

The basic elements of experience, actual occasions, have a richer character than is generally attributed to elements of reality. Actual occasions possess a dual character. On the one hand they form a fundamental component of the fabric of reality. On the other hand, they serve as information for the creation of subsequent actual occasions.

Process theory is a generative theory. The actual occasions that form the essence of reality come into existence through a process of prehension, in which the information residue of prior occasions is interpreted and new occasions generated creatively in a nondeterministic manner.

Actual occasions are transient in nature. They arise, linger briefly and then fade away. In contrast to current physical thinking, process theory asserts the existence of a transient ‘now’.

Essential to process is the idea of becoming. This is subtly different from the notion of generation. For example, an iterated function system generates a trajectory by the repeated application of the function to a previous point: x_1 = f(x_0), x_2 = f(x_1), x_3 = f(x_2), and so on. However, the space upon which this function acts exists a priori. A trajectory in the space is generated, the space itself is not. In process theory, the space itself does not exist a priori, indeed it does not exist at all except as an idealization in some mathematical universe. All that exists is a collection of actual occasions that are continually in the process of becoming. An actual occasion has no existence unless and until it is brought into existence through the action of prehension. It subsequently fades into nonexistence and any future influence that it might have arises solely through its representation in some form of memory.

In process theory events are fundamentally discrete, being comprised of vast numbers of actual occasions. The perception of events as being continuous may well be a deeply ingrained illusion. Its abstraction in mathematical form has given rise to powerful analytical tools, so much so that continuity has been reified as a property of spacetime and histories and entities. Process theory asserts that entities and their motion are actually discrete and their apparent continuity is again a consequence of the process of idealization inherent in the formation of an interpretation.

Actual occasions are held to be holistic entities. It may be convenient for conceptual, descriptive or analytical purposes to consider actual occasions as consisting of individual ‘parts’, but this again constitutes an idealization or contrivance. Each actual occasion must be considered to be a whole unto itself and any information or influence attributed to an actual occasion must be attributed to the actual occasion as a whole and not to any of its supposed parts. Any such parts must be considered to be unobservable as must any presumed properties or characteristics of these parts. Properties and characteristics that may be observed by other actual occasions must be attributed solely to the actual occasion as a whole.
From a process perspective all matter would be viewed as emergent, arising from the evolution of actual occasions. These actual occasions would not be accessible to material entities in much the same way that mind is incapable of sensing the actions of individual neurons, even though mind is emergent from the actions of neurons. Being emergent, the laws governing the behaviour of actual occasions need not be those of quantum mechanics, though the behaviour of entities emerging at the lowest spatiotemporal scales should obey those laws. Likewise, entities emerging from these fundamental quantum entities need not obey the laws of quantum mechanics, or at least quantum mechanical laws need not be relevant for understanding their behaviour, just as the laws governing the action of mind are not the same as the laws governing the neurons giving rise to it. This is a common situation in the theory of emergence. Although lower level entities may give rise to higher level entities, the relationship between these two may be such that there is no one-to-one correspondence between the behaviours at one level and those at the other level, so that the laws governing the lower level become irrelevant for understanding behaviour at the higher level.
There are two other aspects of process that deserve mention. First of all, process has an inherently nonlocal character. A biological organism is an expression of a vast array of processes but these processes are local only in a naive and superficial sense. The entities that participate in these processes are distributed widely in space and time. Mental processes cannot even be localized to the brain as the body and the environment play important roles. Secondly, the entities that participate in process are often fungible. Although a whole organism may not be fungible, its constituent molecules most certainly are, and sometimes components are not even of the same species as the organism, particularly in the case of digestive processes. A board game such as Chess is a simple example of a process. Although individual moves of chess pieces are local, the choice of which piece to move on a given play is inherently nonlocal, though certain game positions may favour local or nonlocal choices. Chess is fungible, so long as any replacement respects the current arrangement of pieces. Chess can be played anywhere, at any time, with almost any objects, real or virtual, so long as a suitable correspondence is established between the objects and their movements and their roles as chess pieces. Processes in themselves exist in an abstract, aspatial and atemporal world, while the actual occasions that they generate manifest in space and time and bear specific relations to one another that are interpreted as properties.
VII Process Interpretation of the Wave Function
There are subtleties of dynamics that are not easily captured by the standard functional analytic formulation of quantum mechanics. Consider the issue of being ‘bound’. Classically it is a fairly straightforward matter to determine whether a particle is bound to another particle because the trajectory of the bound particle will form a closed path with the binding particle lying in the interior of the path. In quantum mechanics this is not so straightforward since particles do not follow trajectories that can be mapped. A free or a bound particle can, in principle, be found anywhere in space. Its mere detection says little about its dynamical state. A detailed determination of the shape of the wave function would help but is unfeasible. Moreover a free particle could be stationary and have a spherically symmetric wave function just like a bound particle. The main difference is that the probability of the free particle being near the centre is fairly large while for the bound particle it is fairly small. A free particle can propagate but how does one distinguish the random motion of a bound particle from the propagation of a free particle, when both can appear more or less anywhere at any time? Moreover, in the case of a spherical potential, the potential extends throughout all of space and so in considering when a free particle becomes bound it is not clear when exactly one is to apply the bound equation and not the free particle equation. In principle the free particle could become bound anywhere and at any time. And if it is bound to one particle could it not also become bound to another particle? To every other particle? The equation describing the dynamics of the particle must change between free and bound conditions and so an additional consideration must come into play to determine when this takes place. What is this additional consideration and how does a particle ‘know’ when to apply it?
The classical interpretation of the wave function is that it provides a probability distribution for the position of the quantum system. More precisely it provides a probability distribution for a detection by a position measurement apparatus, said detection usually attributed to the presence of a particle in that location at that time. This is not a problem given an ensemble view of the wave function, whereby repeated measurements of an ensemble of identically created particles are conducted and the probability distribution of those measurements calculated. There are problems, however, when one wishes to attribute the probability distribution to a single particle or to attribute a physical reality to the wave function much as one attributes reality to the electromagnetic wave. In the case of the latter there are demonstrable effects having an electric and magnetic character which can be attributed to the electromagnetic wave so its reality is not really questioned anymore. The Schrödinger wave functions are quite different in character. Although Aharonov and Vaidman Aharonov have suggested that the wave function of a single particle could be detected using quantum nondemolition measurements, it is only certain statistical measures that can be detected, not the wave function itself.
Consider again the situation of a particle in a spherical potential, this time in the bound state. If the particle is in an eigenstate of the Hamiltonian, say being in energy level n, with angular momentum quantum numbers l and m (the spin factor, which plays no role in what follows, is suppressed), then the wave function takes the form

ψ_nlm(r, θ, φ, t) = N R_nl(r) P_l^m(cos θ) e^{imφ} e^{-iE_n t/ħ}

where R_nl is the radial wave function (real valued), P_l^m is the associated Legendre polynomial (real valued) and N the normalization constant.
The probability distribution for this particle is given by

|ψ_nlm|² = N² R_nl(r)² [P_l^m(cos θ)]²
Now the Hamiltonian in this case is time independent and so one would expect that the probability distribution would also be time independent and that is indeed the case.
Consider now the case in which the particle is in a superposition of adjacent energy levels. The wave function in this case is given by

ψ = a ψ_nlm + b ψ_n'l'm',   n' = n + 1,   |a|² + |b|² = 1

The probability distribution in this case is given as (taking a and b real for simplicity)

|ψ|² = |a|² |ψ_nlm|² + |b|² |ψ_n'l'm'|² + 2ab N N' R_nl R_n'l' P_l^m P_l'^m' cos[(m' - m)φ - (E_n' - E_n)t/ħ]
In this case even though the Hamiltonian remains time independent the probability distribution function now acquires a temporal fluctuation by virtue of an interaction term between the two eigenstates. Thus one no longer has a stationary probability distribution. However, the time average of this probability distribution is

⟨|ψ|²⟩ = |a|² |ψ_nlm|² + |b|² |ψ_n'l'm'|²

which is the usual probability distribution expected from combining the individual distributions. The loss of stationarity would appear to make any attempt to determine this probability distribution experimentally either difficult or impossible. Even in the case of a nondemolition experiment it would be impossible to determine the distribution without synchronizing position sampling to the frequency of the fluctuation and without knowing the phase delay, both of which would require measuring the differences in energy levels and spin angular momenta between the two states, which would appear to require a demolition experiment.
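The cancellation of the fluctuating cross term under time averaging can be checked numerically. The sketch below is only illustrative and not the spherical problem of the text: it uses a one-dimensional particle in a box (units ħ = 2m = 1, my choice) in an equal superposition of the two lowest levels, and verifies that the density averaged over uniformly random sample times approaches the cross-term-free mixture of the two stationary densities.

```python
import numpy as np

# Illustrative stand-in: particle in a box of width L = 1, units hbar = 2m = 1.
L = 1.0
x = np.linspace(0.0, L, 501)

def phi(n):
    # Normalized box eigenfunction.
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):
    # Box eigenenergy in these units.
    return (n * np.pi / L) ** 2

a = b = 1.0 / np.sqrt(2.0)          # equal-weight superposition of n = 1, 2

def density(t):
    # |psi(x, t)|^2 for the superposition; contains the oscillating cross term.
    psi = a * phi(1) * np.exp(-1j * E(1) * t) + b * phi(2) * np.exp(-1j * E(2) * t)
    return np.abs(psi) ** 2

# Average the fluctuating density over uniformly random sample times.
rng = np.random.default_rng(0)
times = rng.uniform(0.0, 200.0, size=10000)
acc = np.zeros_like(x)
for t in times:
    acc += density(t)
mean_density = acc / len(times)

# The cross term averages away, leaving the mixture of stationary densities.
mixture = a**2 * phi(1)**2 + b**2 * phi(2)**2
print(np.max(np.abs(mean_density - mixture)))
```

The instantaneous density differs markedly from the mixture, yet the random-time average reproduces it, which is the point made in the text: sampling recovers only the mean distribution, not the fluctuating wave function itself.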
If one attempted to measure the probability distribution with a single particle this would have to be done at a series of distinct times, say t₁, t₂, …, t_N. The functions being sampled at each time would differ, being the fluctuating distributions above evaluated at the respective times tᵢ.
If one happened to be sampling at the same frequency as the fluctuation, then one would obtain the mean distribution shifted by a systematic drift term, namely the cross term frozen at the common sampling phase.
If one knew the phase delay one might offset it, obtaining the average distribution. If one samples the times uniformly and randomly, then these fluctuations would, on average, cancel each other out, again leaving the average distribution. However, the average distribution is not the wave function, since the fluctuating term is not simply a random variation but rather an integral part of the wave function. Indeed the mean wave function is what would be expected from Kolmogorovian probability theory, which we already know to be inconsistent with quantum mechanics.
In this case we see that the only way in which the actual wave function can be detected is if it were possible to carry out a series of quantum nondemolition experiments on an ensemble of particles, not a single particle. One could not simply measure the frequency with which particles appear since such a distribution would actually have to be measured over time, and thus one would not obtain the actual distribution but only a time averaged version. The actual distribution would require an ensemble of particles whose positions could be sampled simultaneously at repeated times, the frequencies being determined for each individual time. The fluctuating distribution thus has meaning only in relation to an ensemble of particles since it is only with an ensemble that it can be measured at all. It is not at all clear how attributing a probability distribution to a single particle in this case would make any sense.
These considerations suggest problems in the interpretation of the wave function, at least in so far as single particles are concerned. For the most part, treating the wave function as an expression of ensemble behaviour is consistent with experiment as well being theoretically consistent. It admits the possibility, at least in principle, of experimental verification. In the case of single particles, however, it appears no longer possible, in general, to verify it experimentally. Suppose for the moment that we consider the possibility that the probability interpretation of the wave function is a consequence of the statistical character of ensemble behaviour and that it simply does not apply to single particles. What then might the wave function represent?
These two considerations suggest that NRQM may indeed be incomplete and that additional features are needed. Suppose though that these additional factors arise because the two distinct aspects of ultimate reality, actual occasions and the processes that generate them, were conflated when the original mathematical framework of NRQM was developed. Formally, NRQM takes many of the features of classical mechanics, particularly its Hamiltonian formulation, and attempts to effect a translation to a slightly more general mathematics, from point set analysis to functional analysis. Process per se is not explicitly considered in the functional analytic framework. Perhaps it would be better to look for mathematical systems that are better equipped to represent process and then see whether NRQM could be derived within this setting. Indeed, Palmer has already shown that iterated function systems may reproduce many of the essential features of quantum mechanics, at least spin statistics Palmer . In this paper the focus is upon models based on certain types of combinatorial games.
Suppose that the wave function actually describes information about the process responsible for the generation of a single particle. Suppose further that the probability interpretation arises in an emergent manner in the context of a statistical ensemble of particles. Note that in most quantum mechanical formulas, particularly in path integral formulations and in quantum field theory, the wave function enters into the Lagrangian, usually coupled either to itself or to the wave function of another particle. Suppose, therefore, that the wave function describes some kind of ‘strength’ of the generating process. Different processes would then couple through these different process strengths. Positional probability arises merely when a fundamental particle couples to a position measurement device, and that turns out to be a fairly basic coupling dependent upon a term of the form ψ*ψ. In this sense the probability aspect is not an intrinsic feature of the wave function but rather an emergent feature arising out of the interaction between the particle and the measurement device. Given such an interpretation, a single particle could indeed possess a physical wave function, which describes not the particle per se but rather the process that generates the events that we ultimately interpret as a particle. Contradictions arise because we attribute the wave function incorrectly to the particle rather than to the process.
Suppose for the moment that we allow the possibility that the phenomenon that we term particle is not a thing in itself but rather is an emergent manifestation of something more primitive; that a particle is generated and that the links between occurrences of a particle possess an informational aspect. Suppose further that the actual occurrences that are the direct manifestations of these processes occur on a spatiotemporal scale much smaller than that of the particles being generated, so small that they would be inherently unobservable to any usual material entity. Any such occurrence, being so small, would not be resolvable, and so would appear to any material entity as a rather ill-defined or fuzzy object. Let us further suppose that we represent this fuzzy primitive entity as a spatiotemporal transient. As a simple example, suppose we let each such transient have the functional form (in one dimension)

sinc((x - x₀)/Δ) = sin(π(x - x₀)/Δ) / (π(x - x₀)/Δ)

where x₀ locates the peak of the transient and Δ sets its width.
Each such occasion manifests its process. The strength of this process is given by the value of the wave function for the process attributed to the peak of the transient. Therefore at each point x_k the process contributes a transient of the form

ψ(x_k) sinc((x - x_k)/Δ)
An observable event becomes a summation over a multitude of these primitive events. Given an observation at a point z, we associate a set of points {x_k} such that z is an element of the real interval formed by filling in the gaps in {x_k}. To the point z we can associate the function

Φ_z(x) = Σ_k ψ(x_k) sinc((x - x_k)/Δ)
Assume a collection of observations such that the corresponding intervals are disjoint. Then by the Whittaker-Shannon-Kotel'nikov sampling theorem, as the number of observations increases, the resulting sum of contributions will converge to a function defined on the entire real line. This interpolated function becomes the wave function of the particle. The particle thus moves discretely but, due to the small scale, we only observe and interact with the interpolation, the wave. In this way a particle has both wavelike and particlelike aspects but there is no contradiction and no paradox; it is merely a question of scale.
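The convergence of a sum of discrete sinc transients to a continuous function can be illustrated with standard Whittaker-Shannon interpolation. The sketch below is illustrative only: the band-limited function f is a stand-in for a wave function, and the sample spacing T and window are my choices, made to satisfy the Nyquist condition.

```python
import numpy as np

# A band-limited stand-in for the wave function (highest frequency 2.0).
def f(x):
    return np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 2.0 * x)

# Discrete "occasions": samples at spacing T = 0.2, satisfying the Nyquist
# condition 1/T > 2 * 2.0, over a window wide enough to ignore truncation.
T = 0.2
n = np.arange(-200, 201)
samples = f(n * T)

def interpolate(x):
    # Whittaker-Shannon interpolation: a continuous wave from discrete events.
    # np.sinc is the normalized sinc, sin(pi u)/(pi u).
    return np.sum(samples * np.sinc((x - n * T) / T))

xs = np.linspace(-2.0, 2.0, 101)
reconstructed = np.array([interpolate(x) for x in xs])
print(np.max(np.abs(reconstructed - f(xs))))
```

The reconstruction error away from the window edges is tiny, illustrating how an apparently continuous wave can be carried entirely by a discrete set of contributions.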
VIII Game Theory
VIII.1 Combinatorial Games
A combinatorial game is a mathematical abstraction of games that are commonly played in real life such as Tic-Tac-Toe, Dots and Boxes, Checkers, Chess, Go and so on. Combinatorial games involve players who carry out moves in an alternating manner in the absence of random elements and in the presence of perfect information. The end of play is generally heralded by an inability of the players to make a move, the last player able to move being declared the winner. Combinatorial games are to be distinguished from the games usually studied in economics and biology in which players may move simultaneously in the presence of complete or incomplete information, in which there may be random elements, and in which the end of play is measured relative to some optimality criterion applied across all possible game plays.
The formal theory of combinatorial games began with the work of Sprague and Grundy in the 1930s but became a mature branch of mathematics in the 1970s with the work of John H. Conway and others Conway . A close cousin, the Ehrenfeucht-Fraïssé game, has been used extensively in mathematical logic and model theory to construct representations of formal systems. The focus here is on Conway’s theory, which has its most developed expression in the study of short deterministic two player partisan games, though research continues to expand the theory to long indeterminate multiplayer games with generalized outcomes. Short deterministic two player partisan games form a partially ordered Abelian group. Moreover, there exists a subgroup of such games that can be interpreted as numbers and constitute the expanded field of surreal numbers. The same group admits additional elements that have an interpretation as infinitesimals, extending the field to include elements of nonstandard analysis.
In the combinatorial games discussed below, it is assumed that there are two players, Left and Right, who move alternately, possess possible moves that are distinct from one another, and possess complete information about the state of the game during any play. Moreover, the nature of the game is such that play is guaranteed to end after a finite number of plays. The state of the game at any play can be fully determined. The options for a given player from a particular state of the game are simply the set of all states of the game that can follow a single play of the game by that player. There will be a distinct set of Left options and of Right options. Go is an example of such a game. The definition of a combinatorial game does, however, include one additional assumption, which is that the last player to play wins. Many games do not satisfy these conditions but are still capable of analysis within this framework. The particular definition used was chosen because of its generality and the depth of its mathematical results, but there have since been many generalizations to include transfinite play, loopy play, misère play, play with different outcome determinants, and multiple players.
The play of a game begins with some initial state (position or configuration). A player moves, resulting in a new position. The other player then moves, again resulting in a new position, and the process repeats. The complete play of the game is thus described as a (finite) sequence of such positions, terminating when no further play is possible. For convenience we can catalogue all possible sequences of game play by constructing a game tree. Denote the players as Left and Right. Starting from a particular position, we arrange below and to the left, all possible positions that can be achieved by a move on the part of Left. Similarly we arrange below and to the right, all possible positions that can be achieved by a move on the part of Right. The process is then repeated for this new level of game and so on until no more positions can be achieved. A particular complete play of the game will correspond to a path down the game tree, beginning with the initial position and then proceeding to successive positions by alternating along left and right steps.
The formal definition of a combinatorial game is conceptually quite confusing at first but it possesses great generality. It is inherently recursive and most constructions in combinatorial game theory arise through some (implicit) form of bootstrapping or through top-down induction. The technique is powerful and worth the mental effort to master it. Since each game begins with a game position we can define a game by that initial position. Play then becomes a sequence of games rather than a sequence of positions. Moreover each position has associated with it a specific set of positions obtainable by a move of Left, termed Left options, and another specific set of positions obtainable by a move of Right, termed Right options. Since the subsequent play of the game will depend only upon these sets of options a position may just as well be equated with these two sets of options. Therefore we associate any position with its set of Left options G^L and its set of Right options G^R. Now for the confusing part. Taking each position to be a game, each set of options can be viewed as a set of games. Hence we define a game G = {G^L | G^R} where G^L and G^R are sets of games. A game has an alternative definition in terms of its game tree, which represents the possible moves for each player from any given game position.
The fundamental theorem of combinatorial games states that, given a game G between players Left and Right with, say, Left moving first, either Left can force a win moving first, or Right can force a win moving second, but not both.
This results in four distinct outcome classes for games.
These are

Positive: Left can force a win regardless of who goes first

Negative: Right can force a win regardless of who goes first

Zero: The second player to play can force a win

Fuzzy: The first player to play can force a win
Positive, negative and zero games form the class of surreal numbers under addition as defined below. The fuzzy games form the class of infinitesimals. Positivity-negativity is a symmetry operation given by reversal of the roles of Left and Right. For impartial games (the Left and Right options are always the same), there are only two outcome classes, fuzzy and zero.
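The fundamental theorem and the four outcome classes can be computed directly from the recursive definition. The sketch below represents a game as a pair (Left options, Right options) and classifies the four games born by day 1; the representation and function names are mine, not standard notation.

```python
from functools import lru_cache

# A short partisan game as (Left options, Right options), each a tuple of games.
ZERO = ((), ())                # { | }  : no moves for either player
ONE  = ((ZERO,), ())           # {0| }  : one free move for Left
NEG1 = ((), (ZERO,))           # { |0}  : one free move for Right
STAR = ((ZERO,), (ZERO,))      # {0|0}  : either player has exactly one move

@lru_cache(maxsize=None)
def left_wins_moving_first(g):
    left, right = g
    # Left wins moving first iff some Left option leaves Right, now to move, losing.
    return any(not right_wins_moving_first(gl) for gl in left)

@lru_cache(maxsize=None)
def right_wins_moving_first(g):
    left, right = g
    return any(not left_wins_moving_first(gr) for gr in right)

def outcome(g):
    lw, rw = left_wins_moving_first(g), right_wins_moving_first(g)
    if lw and not rw: return "positive"   # Left wins regardless of who starts
    if rw and not lw: return "negative"   # Right wins regardless of who starts
    if lw and rw:     return "fuzzy"      # first player wins
    return "zero"                         # second player wins

print(outcome(ZERO), outcome(ONE), outcome(NEG1), outcome(STAR))  # zero positive negative fuzzy
```

The recursion mirrors the fundamental theorem: in a finite game the player to move either has an option that loses for the opponent, or every option wins for the opponent, so exactly one side can force a win from any position.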
The formal definitions are Conway :

A combinatorial game is given as G = {G^L | G^R} where G^L and G^R are sets of games

The sum of games G, H is defined as G + H = {G^L + H, G + H^L | G^R + H, G + H^R}

The negative of a game G is defined as -G = {-G^R | -G^L}

For two games G, H, equality is defined by G = H if for all games X, G + X has the same outcome as H + X

For two games G, H, isomorphism is defined as G ≅ H if G and H have the same game tree

For two games G, H, we say that G ≥ H if for all games X, Left wins G + X whenever Left wins H + X

A game G = {G^L | G^R} is a number if all elements of G^L and G^R are numbers and g^L < g^R for all g^L ∈ G^L and g^R ∈ G^R.
For any integer n, a game in which Left has n free moves is assigned the number n, while any game in which Right has n free moves is assigned the number -n. In the case of short games, the number assigned will be a dyadic rational, i.e. a rational of the form m/2^k for some integers m, k ≥ 0. The number of the sum of two games that are numbers is the sum of the numbers of the individual games. Multiplication and division can be defined on games that are numbers in such a way that these games form a field, the surreal numbers. Surreal numbers that are nondyadic rationals arise through a consideration of games of transfinite length and include the rationals, the reals and the ordinals. They may be generated using techniques similar to that of Dedekind cuts for the creation of the reals.
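The assignment of dyadic rational values to number games can be sketched via the simplicity rule: {L | R} takes the value of the earliest-born number lying strictly between its options (0 before integers, integers before halves, and so on). The implementation below is a minimal illustration for short games whose values are dyadic rationals; the representation and names are mine.

```python
from fractions import Fraction

# Games that are numbers, as (Left options, Right options).
ZERO = ((), ())
ONE  = ((ZERO,), ())
TWO  = ((ONE,), ())
HALF = ((ZERO,), (ONE,))          # {0 | 1} = 1/2

def simplest_greater(lo):
    # Simplest (earliest-born) number exceeding lo: 0 if possible, else an integer.
    return Fraction(0) if lo < 0 else Fraction(lo.numerator // lo.denominator + 1)

def simplest_between(lo, hi):
    # Simplest number strictly between lo and hi (assumes lo < hi).
    if lo < 0 < hi:
        return Fraction(0)
    if hi <= 0:
        return -simplest_between(-hi, -lo)
    n = simplest_greater(lo)      # here 0 <= lo, so n is the least integer > lo
    if n < hi:
        return n
    a, b = n - 1, n               # no integer fits: bisect dyadically in (n-1, n)
    while True:
        mid = (a + b) / 2
        if lo < mid < hi:
            return mid
        a, b = (mid, b) if mid <= lo else (a, mid)

def value(g):
    # Value of a game that is a number, by the simplicity rule.
    left, right = g
    lo = max((value(x) for x in left), default=None)
    hi = min((value(x) for x in right), default=None)
    if lo is None and hi is None: return Fraction(0)
    if hi is None: return simplest_greater(lo)
    if lo is None: return -simplest_greater(-hi)
    return simplest_between(lo, hi)

print(value(TWO), value(HALF), value(((HALF,), (ONE,))))  # 2 1/2 3/4
```

For example {1/2 | 1} evaluates to 3/4, the dyadic rational of smallest birthday in that gap, matching the pattern by which short number games acquire values of the form m/2^k.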
Games can be generated recursively starting with the simplest game { | }, the game with no options at all. Call this day 0. At day 1, one may construct four possible games, { | }, {0 | }, { | 0} and {0 | 0}, denoted 0, 1, -1 and * respectively. There are 36 games at day 2, 1474 games at day 3, and at day 4 a number known only to lie between enormous bounds Albert .
If tokens are added to these games then it becomes possible to form a vector space, and with a suitable notion of commutation, a Lie algebra. The significance of this is that it now becomes possible, at least in principle, to use token combinatorial games as representations of Lie algebras, and thus of the fundamental processes of nature vis-à-vis their description in the standard model of quantum field theory.
VIII.2 Game sums and products
One distinct advantage of combinatorial game theory is that it admits many different kinds of linearity, that is, many different kinds of sum may be defined. In every case, however, a key feature of a sum of games G and H is that on any given play the player whose turn it is may play either in G or in H. Thus play alternates between the two games but not necessarily in a sequential manner. The combinatorial or disjoint sum G + H defined in the previous section describes games that may be thought of as being played out on different boards. The interpretation of G + H is that on a given play a move in one game will have no effect upon play in the other game. Let us assume, however, that the distinct games are being played out on the same board, albeit with different pieces or tokens. The disjoint sum may still be defined in this case so long as play of one game has no effect upon play of the other game. An example might be where tokens from different games may be applied to a single location on the board without affecting one another in any way. Another occurs when play is localized to nonoverlapping regions of the board. In physics an example of such a situation is in the dynamics of bosons, where multiple bosons may occupy the same spatiotemporal location without affecting one another in any manner.
We may define an exclusive sum of G and H in which no move of G may occur on a site occupied by a piece of H, and vice versa; otherwise there are no restrictions on game play. There is a weak kind of interaction present between these two games but play of one game is not determined by the other, merely constrained sometimes and at some locations. It is a rather passive kind of interaction and there is no real interchange of information between the two games. Physically speaking, there is no exchange of energy between the games. An example in physics of such a situation is in the dynamics of fermions, which are not allowed to occupy identical states.
A third sum may be defined when certain moves of G influence which moves of H may subsequently be played, and vice versa. In this case, information from the play of one game has an effect upon the subsequent play of the other game and so an exchange of relevant information indeed takes place. Play may or may not be exclusive. In physics, the situation of a binding of two particles would be describable by such a sum. We speak of an interactive sum, but understand that this does not define a fixed form of game play. Rather, its interpretation will depend upon the particular games and their context. At best one can say that it will be an element from a set of possible forms of game play. Note that technically an exclusive sum is really a form of interactive sum but it is singled out because of its ubiquity and the fact that it represents more of an avoidance of interaction than interaction.
These different sums may be understood in terms of the game tree. The formula for G + H shows that the game tree is built up in a rather complicated manner. Let P(G) denote the set of all game positions for G, and likewise P(H) for H. Then the positions of G + H are the pairs P(G) × P(H). Next one adds edges as follows. For any given game position p of G, consider the subtree consisting of all subsequent Left moves in G, with similar notation for Right moves and for H. Then from any combined position (p, q) the subsequent Left edges lead to the positions (p′, q) and (p, q′), where p′ ranges over the Left options of p in G and q′ over the Left options of q in H, and similarly for Right edges. In the exclusive sum of G and H, the game tree is given as the game tree of G + H minus all edges and positions corresponding to G plays and H plays appearing on the same board locations. The game tree will then be a proper subtree of the tree for G + H.
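The disjoint sum can be realized directly from the recursive formula G + H = {G^L + H, G + H^L | G^R + H, G + H^R}. The sketch below, with an ad hoc representation of games as pairs of option tuples and function names of my own, verifies that 1 + (-1) is a zero game, i.e. a second-player win.

```python
from functools import lru_cache

# Games as (Left options, Right options).
ZERO = ((), ())
ONE  = ((ZERO,), ())
NEG1 = ((), (ZERO,))

@lru_cache(maxsize=None)
def game_sum(g, h):
    # Disjoint sum: the mover plays in exactly one component.
    (gl, gr), (hl, hr) = g, h
    left  = tuple(game_sum(x, h) for x in gl) + tuple(game_sum(g, x) for x in hl)
    right = tuple(game_sum(x, h) for x in gr) + tuple(game_sum(g, x) for x in hr)
    return (left, right)

@lru_cache(maxsize=None)
def first_player_wins(g, left_moves):
    # The mover wins iff some option leaves the opponent to move and lose.
    options = g[0] if left_moves else g[1]
    return any(not first_player_wins(o, not left_moves) for o in options)

def is_zero(g):
    # Zero outcome class: whoever moves first loses.
    return not first_player_wins(g, True) and not first_player_wins(g, False)

print(is_zero(game_sum(ONE, NEG1)))   # 1 + (-1) = 0: a second-player win
```

The second player's winning strategy in the sum is simply to answer in the component the first player just moved in, which is the standard mirroring argument behind G + (-G) = 0.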
In addition we have several different notions for the product of two games. The combinatorial product defined above involves non-simultaneous play and corresponds to the arithmetic product when restricted to games that are also surreal numbers. In these additional products the key notion is that on any given play the player whose turn it is must play both G and H simultaneously. In the direct product, G and H are played simultaneously but freely. In the exclusive direct product, G and H are played simultaneously but never on the same board site, which will in general necessitate some rule for resolving such collisions. Finally there is a notion of an interactive product, which again corresponds to simultaneous play but where moves are no longer free in each game but are restricted or even coupled to varying degrees, giving rise to a collection of different games.
Sums would appear to best describe the generation of particles in superpositions of eigenstates since we want only a single informon to manifest at any step of game play. Products would better describe the situation of multiple particles since they allow multiple games to be played simultaneously, corresponding to the manifesting of multiple particles simultaneously. However there may be situations in which the generation of particles must occur sequentially and in such cases sums must be used. That might occur in the case of fermions since identical states are to be avoided. Since no such constraint applies to multiple bosons, they presumably may be described by products.
VIII.3 Combinatorial Token Games
The idea of a combinatorial game with tokens is used extensively so a few words are in order. Most combinatorial games are played on some kind of board using pieces or some kinds of marks which distinguish moves. Typical examples would be Chess, Checkers, Go, Hackenbush, Tic-Tac-Toe. A token is simply some kind of object that is placed on the board and which conveys information relevant to the play of the game. For example, chess pieces, by their association to specific roles, determine the kinds of moves available to them. Tokens may have mathematical or physical properties in their own right which can be useful to the play of the game. In specifying a combinatorial game with tokens we are considering situations in which tokens are either created or modified in the course of game play and the operations that may be performed upon these tokens enable one to construct combinations of games. The reality games to be described below are played out on a causal manifold and tokens take the form of certain functions or vectors. Various operations may be defined on these tokens and used to define new games or to combine games.
For example, given a complex number c and a token game G, if we lay down a token t in G then in the game cG we lay down a token ct, where some property of the token is modified by c. This enables the sums and products defined above to be expanded into more complex algebraic forms. In many cases, the number of tokens in a token game is constrained to some fixed value independent of the length of play. This is certainly true of a game like Chess. In other cases the number of tokens is determined solely by the possible length of play. In either case, in a game formed as a sum of subgames according to G = c₁G₁ + c₂G₂ + …, one may interpret the term cᵢGᵢ to mean that the fraction of tokens assigned to the game Gᵢ is determined by cᵢ, where the total number of tokens is a function of the length of game play. In some situations, such as the reality game described below, both interpretations may hold simultaneously, so that cᵢ may modify both the number of tokens and some property of the tokens.
VIII.4 Ehrenfeucht-Fraïssé Games
Games may be used to analyze the structure of mathematical theories and to compare structures within these theories. A brief digression to explore the idea of an Ehrenfeucht-Fraïssé game will set the stage for the use of games to generate structures. The Ehrenfeucht-Fraïssé game hodges appears in the study of mathematical logic where it is used to determine whether two structures may be viewed as expressing the same set of properties from the perspective of a specific logical theory. Mathematical logic consists of a collection of formal sentences constructed according to specified rules from an alphabet consisting of constant symbols, variable symbols, relational symbols, quantifiers and logical connectives. A sentence in formal logic has a counterpart in natural language but its formal nature makes it amenable to mathematical analysis. There are in addition a collection of rules which determine how one may create new sentences out of a preexisting collection of sentences and which ensure that the new collection remains logically consistent and coherent. These are formal analogues of the laws of deduction taught in courses in philosophy and reasoning.
A first order language consists of a collection of symbols having different interpretations and formed into finite length strings according to a predetermined set of rules. The rules are designed to maintain consistency in the interpretation of these formulas or sentences. The basic symbols are constants c₀, c₁, …, variables x, y, z, …, functions f, g, …, relations R, S, …, equality =, and the logical quantifiers and connectives ∀, ∃, ¬, ∧, ∨, →. A term consists of a constant, variable, or function of constants and/or variables. A closed term has no variables. An atomic formula consists of s = t, where s and t are terms, or a relation of terms. A formula consists of a finite application of the logical quantifiers and connectives to a collection of atomic formulas. A variable is free if it is not within the scope of some quantifier. A sentence is a formula having no free variables. A theory is a collection of sentences. A model of a theory is a mathematical structure such that each constant in the theory corresponds to an element of the structure, each function and relation of the theory corresponds to a function and relation of the structure, and such that every sentence of the theory may be interpreted in the model and found to be true.
Often one wishes to understand the explanatory power of a theory. Does a theory, for example, describe everything about a particular model or are some features left unmentioned? Is the theory powerful enough to distinguish between specific models? An answer to the latter question can often be obtained through the play of an Ehrenfeucht-Fraïssé game.
Suppose that one is given two mathematical structures A and B, and one wishes to determine whether or not these two structures can be distinguished using a theory expressed in the language of first order logic. Assume that there are two players I, II and furthermore assume that play occurs for exactly n moves, where n is fixed in advance.
The game play is extraordinarily simple. Player I moves first and is free to choose any element they like from either $\mathcal{A}$ or $\mathcal{B}$. Player II then moves and may pick any element they like, but only from the structure that Player I did not choose from. Play is repeated but with the caveat that at each step each player must choose a point that has not already been chosen. If there are no such elements to choose from then they simply forfeit their turn. Play continues in this way until a total of $n$ steps have been played.
At the end of play one determines which of the two players has won the game. Let $a_i$ be the element of structure $\mathcal{A}$ selected at the $i$th move (whether by Player I or II) and let $b_i$ be the element of structure $\mathcal{B}$ selected at the $i$th move. One says that Player II wins the game if, whenever a relation $R$ holds in $\mathcal{A}$ for a sequence of elements $a_{i_1}, \dots, a_{i_k}$, then it also holds in $\mathcal{B}$ for the corresponding elements $b_{i_1}, \dots, b_{i_k}$, and conversely. Otherwise one says that Player I wins.
A strategy is a systematic procedure which tells a player how to move following a particular series of game plays. For example, one could simply pick an element at random. Usually one is interested in strategies that are deterministic, meaning that given a particular sequence of points selected in previous game plays there is a unique point to be selected on the current play; the choices are fixed in advance. If Player II possesses a deterministic strategy which guarantees a win in $n$ plays against Player I no matter how Player I plays, then we say that the game is determined and write $\mathcal{A} \sim_n \mathcal{B}$.
Returning to theory, given a formal sentence $\phi$ and a model $\mathcal{A}$, we write $\mathcal{A} \models \phi$ if one can find elements, constants and relations in $\mathcal{A}$ corresponding to those in $\phi$ so that the precise relations expressed by $\phi$ are satisfied by these corresponding relations in $\mathcal{A}$. If, for every logical formula $\phi$ having at most $n$ quantifiers, $\mathcal{A} \models \phi$ if and only if $\mathcal{B} \models \phi$, then we write $\mathcal{A} \equiv_n \mathcal{B}$.
The power of Ehrenfeucht-Fraissé games arises from the fact that $\mathcal{A} \sim_n \mathcal{B}$ if and only if $\mathcal{A} \equiv_n \mathcal{B}$. The game is often much easier to use to solve the logic problem than are logic tools alone. Thus games may be used heuristically, without any ontological attribution being made as to the nature and status of the players.
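For finite structures, the existence of a winning strategy for Player II can be decided by brute force. The following is a minimal sketch, not part of the source text: structures are taken to be finite sets equipped with a single binary relation, repeated choices are permitted (which does not change the winner in these examples), and the function names are invented for illustration. It exhibits the classical fact that two finite linear orders of different lengths cannot be distinguished in one move but can be in two.

```python
from itertools import product

def is_partial_iso(pairs, RA, RB):
    # The chosen pairs (a_i, b_i) must preserve the relation in both directions.
    for (a1, b1), (a2, b2) in product(pairs, repeat=2):
        if ((a1, a2) in RA) != ((b1, b2) in RB):
            return False
    return True

def player_II_wins(A, RA, B, RB, n, pairs=()):
    """True if Player II has a winning strategy in the n-round game."""
    if not is_partial_iso(pairs, RA, RB):
        return False
    if n == 0:
        return True
    # Player I may pick any element of either structure;
    # Player II must then answer in the other structure.
    for a in A:  # Player I plays in A
        if not any(player_II_wins(A, RA, B, RB, n - 1, pairs + ((a, b),)) for b in B):
            return False
    for b in B:  # Player I plays in B
        if not any(player_II_wins(A, RA, B, RB, n - 1, pairs + ((a, b),)) for a in A):
            return False
    return True

# Two finite linear orders, of sizes 2 and 3.
A, B = range(2), range(3)
RA = {(i, j) for i in A for j in A if i < j}
RB = {(i, j) for i in B for j in B if i < j}
print(player_II_wins(A, RA, B, RB, 1))  # True: one move cannot tell them apart
print(player_II_wins(A, RA, B, RB, 2))  # False: Player I wins with two moves
```

In two moves Player I picks the middle element of the longer order and then an element on whichever side Player II cannot match, which is exactly the distinction a two-quantifier sentence can express.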
VIII.5 Generative Games and Forcing
Another important question facing logicians is to determine when a logical theory actually possesses a model, and to exhibit such a structure. One of the most famous examples of this was the continuum hypothesis. This question concerns the sizes (cardinalities) of infinite sets, and in particular whether every infinite set of real numbers has either the cardinality of the natural numbers or that of the set of all real numbers. Cohen showed that it was possible to find models of set theory which extended the usual set theory, one of which satisfied the continuum hypothesis and one of which did not. In this way he solved a long standing foundational problem in mathematical logic. The technique that he used to create such models involves a method called forcing hodges .
The details are very technical but begin with the idea of a notion of consistency. A notion of consistency enables one to determine which theories actually possess models. Not all theories possess models. For example, the theory given by the single sentence $\exists x\, (x \neq x)$ has no model. Theories can be built up step by step provided that at each step one maintains consistency among the statements of the theory. This follows from the compactness theorem, which states that if every finite subset of a theory possesses a model then the theory itself possesses a model. Building a theory step by step in this manner requires some notion of consistency. Formally, a notion of consistency is a collection $P$ of sets of sentences of the language $L$ which satisfy certain rules of logical consistency. For example, if $p$ is a condition in $P$ and $t$ is any closed term in $L$, then $p \cup \{t = t\}$ is in $P$. As an example involving a sentence, suppose that $\phi \vee \psi$ lies in some condition $p$; then either $p \cup \{\phi\}$ or $p \cup \{\psi\}$ lies in $P$, but not necessarily both. There are seventeen such rules whose details are not necessary here (see hodges ). Each element $p$ of $P$ is called a condition. The idea is that each condition consists of a collection of formal sentences that are logically consistent. The important point is that if $P$ is a notion of consistency and $p$ is a condition of $P$, then $p$ has a model.
This is proven by virtue of a game. Assume that there are two players, I and II. The number of plays of the game is fixed in advance and described by some infinite ordinal number. The players alternate in making a move, Player I playing first. The goal of the game is to construct an increasing set of conditions possessing a model at each stage, and then forcing the final union of all of these conditions to have a model as well. At each stage of the construction different tasks are assigned according to each of the seventeen rules, and these tasks are performed in such a way that only a finite number of new elements are added to the previously constructed condition. For example, one task might be as follows: given some condition $p$ constructed up to this point, one selects a closed term $t$ and, if it is not already present in $p$, one adds $t = t$ to $p$. Similarly, suppose that $p$ has already been constructed and that the formula $\phi \vee \psi$ lies in $p$. Then this task might be to add either $\phi$ or $\psi$, but not both. Whenever a limit ordinal is reached one simply assigns it the condition formed by taking the union over all previously constructed conditions. The tasks are each repeated a sufficient number of times to ensure that at the end of the construction no possible moves have been left undone. One possible strategy is to carry out a sufficient number of steps at each stage of the construction so as to parse at least once through the collection of all possible instances of all possible rules. Of course that will in general amount to a transfinite number of tasks to be performed at each stage of the construction, and possibly a transfinite number of stages to complete the construction. That this procedure works is due to the recursive nature of the ordinals upon which this inductive process depends. Additional constraints may be placed on the choices made at each step. Finally some criterion is established which determines who wins the game.
In other words the set of all possible sequences of play is partitioned into two disjoint subsets, one consisting of all wins for Player I and the other for all possible wins for Player II.
One begins with a particular first order language $L$ and enlarges it to form a new language $L^+$ by adding a set of new constants, called witnesses. A notion of forcing for $L^+$ is a notion of consistency which satisfies the following two conditions:

- if $p$ is a condition in $P$, $t$ is a closed term (meaning no variables) in $L^+$, and $w$ is a witness which does not appear in either $p$ or $t$, then $p \cup \{w = t\}$ lies in $P$;

- at most finitely many witnesses appear in any condition $p$.
Let us restrict ourselves to games in which there are only a countable number of steps. Let $\Phi$ be some property that we would like our model to possess. One introduces witnesses and atomic formulae describing the expression of the property. We allow Players I and II to alternate play as above, carrying out all of the necessary tasks and incorporating these witnessed formulae into the notion of consistency. If at the end of play the union of the chain of created conditions has property $\Phi$ then we say that Player II wins. If Player II has a strategy which enables them to win no matter how Player I plays, then the property $\Phi$ is said to be enforceable.
The importance of forcing is that it allows us to build up a structure step by step using a particular kind of game and ensure that it possesses a particular property. The game approach is not only simpler in many cases than the axiomatic approach but it possesses the generative character that we seek for any model based on process theory. A more general technique for constructing classes of mathematical structures using games is presented in Hirsch and Hodgkinson Hirsch .
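The flavour of such generative games can be conveyed by a toy simulation. The sketch below is an illustration only, not Hodges' construction: conditions are finite consistent sets of propositional literals over four variables, the "tasks" simply decide variables, and the enforceable property is that the final condition decides every variable. Player II's strategy guarantees this property no matter how Player I plays.

```python
import random

# Toy forcing-style game: a condition is a consistent finite set of
# literals (variable, truth value) over four propositional variables.
VARS = range(4)

def consistent(cond):
    """No variable carries both truth values."""
    return not any((v, True) in cond and (v, False) in cond for v in VARS)

def undecided(cond):
    return [v for v in VARS if (v, True) not in cond and (v, False) not in cond]

def play(player_I, rounds=8):
    """Players alternate extending the condition; Player I moves first."""
    cond = set()
    for step in range(rounds):
        free = undecided(cond)
        if not free:
            break
        if step % 2 == 0:              # Player I: any consistent move
            cond.add(player_I(free))
        else:                          # Player II's strategy: always
            cond.add((free[0], True))  # decide some still-free variable
        assert consistent(cond)        # consistency is maintained at every step
    return cond

# The property "every variable is decided" is enforceable:
# Player II's strategy wins against arbitrary play by Player I.
random.seed(0)
for _ in range(100):
    final = play(lambda free: (random.choice(free), random.choice([True, False])))
    assert consistent(final) and not undecided(final)
print("property enforced in all plays")
```

The union of the chain of conditions always yields a total, consistent truth assignment, which is the toy analogue of the final union of conditions possessing a model.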
IX Archetypal Dynamics
Although information has been considered to play a significant role in governing the behaviour of organisms for nearly a century, it is only recently that ideas of information began to appear in the physics literature, based upon the connection between information and entropy as proposed by Shannon and Weaver Shannon , which actually refers to the capacity for information and not its content. Ironically, information became a focus of interest only after it was stripped of any meaning. The program of archetypal dynamics is an attempt to provide a conceptual framework for studying the role of meaning laden information across disciplinary boundaries, particularly in those situations in which emergent phenomena appear. It postulates that the various entities of reality arise from and exist within particular conditions, and that their interactions with one another and with the larger environment exhibit patterns, consistencies and constraints, all of which admit an effective conceptualization termed a semantic frame. The semantic frame gives meaning to the fundamental ontological questions of who, what, when, where, how and why, and thus to the entities and events themselves, and it is presumed that interactions between and with these entities are governed by flows of information whose meaning is imparted by the semantic frame.
In Archetypal Dynamics Sulis , the behaviour of entities is held to be determined by salient, meaning laden information, which each entity detects and to which each entity responds according to its nature. The saliency of information is determined by each entity itself and is a consequence of its internal dynamics. Salience is considered to be a precursor to actual meaning. Salience is manifest in the phenomenon of transient induced global response synchronization (TIGoRS), in which an entity is capable of forming a differentiated pattern of responses in reaction to distinct patterns. Those patterns that induce the greatest convergence among responses are termed salient. A related concept is that of compatibility, first introduced by Trofimova in her pioneering studies of emergent models of dynamical networks termed ensembles with variable structures (EVS), and used to determine when agents enter into the formation of dynamical linkages Trofimova1 .
The Fundamental Triad of archetypal dynamics refers to realisations (the entities comprising the aspect of reality under consideration), interpretations (the semantic frames used by these entities or an observer to guide behaviours and interactions  exemplars of which are termed archetypes) and representations (the formal, linguistic or symbolic systems used to describe the realisationinterpretation relationship).
Representations have generally taken the form of explanatory narratives, archetypal imagery, or mathematical models. The apparent success of many mathematical models, particularly in the physical sciences, has sometimes led to the belief that the mathematical depictions or descriptions of reality actually are reality. This reification hides the fact that these mathematical models are idealizations of reality. From the standpoint of archetypal dynamics these mathematical theories constitute an interpretation of reality and particular mathematical models form archetypes. These archetypes are ideals to which reality may sometimes form a close approximation under particular conditions and circumstances. They are not reality. In the real world there is no infinity of entities, nor infinite volumes, temperatures, energies or masses. Nevertheless, under particular conditions some aspects of reality may behave in ways that closely mimic the idealization. Usually these conditions are those in which fluctuations of certain properties or the effects of extraneous influences or scales can be minimized for at least the duration of observations. Under such conditions one may think of reality as a finite approximation to the idealization (archetype) or conversely, the archetype as an infinite limit idealization of the reality.
Archetypal dynamics explicitly distinguishes between reality (realisation) and archetype (interpretation) and emphasizes the effective nature of the interpretation by taking notice of the particular conditions under which the semantic frame associated with the interpretation provides an effective description and interpretation of reality. Archetypal dynamics asserts that all physical theories, indeed all theories whatsoever, are at best effective theories, which hold under particular sets of conditions and interactions and demonstrate diminishing efficacy as these constraints are progressively violated. Even the most universal of physical laws come into question under conditions in which the symmetries underlying their existence fail to hold, or at extremes of scale where certain assumptions such as the continuity or existence of spacetime come into question.
Rather than seeking a universal theory of everything, archetypal dynamics sees the world as being governed by a kaleidoscope of effective theories that interact with one another at the condition boundaries. Meaning becomes the relevant currency of exchange but meaning applies only conditionally and creativity arises in those regions where one set of conditions gives way to another. Emergence is viewed as a fundamental aspect of reality, with entities arising out of a cocreative interplay between realisation and interpretation, which serves to stabilize the conditions for their existence and persistence.
In the archetypal dynamics perspective reality is always creative and in flux. Entities are conditional and therefore transient in nature. Entities, information, and meaning come into existence, persist, and fade away. There is a notion in the physics literature of a law of conservation of information, but this is a bizarre idea derived from the unitary evolution of quantum systems, and it certainly does not apply to meaning laden information. The idealizations that represent meaning can be thought of as existing in some Platonic universe, but their applicability to reality fluctuates as the necessary conditions pass into and out of existence. The metaphysics of archetypal dynamics stands in contrast to the deterministic world view that has dominated the physical sciences. It posits a reality that is always in the process of becoming, that is always changing, that is fundamentally transient.
X Interpolation Theory
Interest in the use of interpolation theory was inspired by the work of Kempf Kempf , who used interpolation theory to provide a bridge between discrete and continuous representations of spacetime and quantum fields. In physics, the state of a system is most commonly represented by a vector $v$, defined over some field, usually the reals or complex numbers, together with an inner product, and having finitely or countably many components. These components are defined relative to a basis, which consists of a collection of vectors $e_i$, none of which can be expressed as a linear combination of the others (independence). The number of vectors in the basis gives the dimension of the vector space. Relative to an orthonormal basis, the components of a vector are given by the inner product of $v$ with the different vectors that constitute the basis, i.e. $v_i = \langle v, e_i \rangle$. The significance of the basis is that we can write each vector as a unique sum $v = \sum_i v_i e_i$.
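The decomposition just described can be checked numerically. A minimal sketch, assuming NumPy and a random orthonormal basis of four-dimensional real space obtained by QR factorization:

```python
import numpy as np

# Components relative to an orthonormal basis are inner products,
# and the vector is recovered as the sum of components times basis vectors.
rng = np.random.default_rng(0)

# Build a random orthonormal basis of R^4 via QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
basis = Q.T                          # rows are the basis vectors e_i

v = rng.normal(size=4)
components = basis @ v               # v_i = <v, e_i>
reconstructed = components @ basis   # v = sum_i v_i e_i

print(np.allclose(v, reconstructed))  # True
```

The uniqueness of the expansion is exactly what independence of the basis vectors guarantees.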
In classical mechanics each component of such a vector specifies a particular measurable property of the system, such as position or momentum. In quantum mechanics the state vector is generally infinite dimensional and is usually interpreted as giving a probability distribution from which the distribution of measurable properties may be obtained.
In quantum mechanics the components of such a decomposition do not provide the results of any measurement directly. It is only the vector as a whole which can be ascribed a measurement outcome. Such vectors must therefore be treated holistically and do not represent any kind of generation or evolution.
The decomposition of vectors into a sum of basis vectors is a powerful technique mathematically which explains its widespread usage. But the notion of independence required to define a basis turns out to be too strong for many applications. This approach treats the vector space and basis vectors as having a prior existence and in the standard Fourier series approach the coefficients are defined as integrals over the base space of the basis functions. This may work in a static universe framework but certainly not in a process framework in which spacetime is being generated. If one is to remain faithful to and consistent with the generative approach, then another method of generating functions is needed. Fortunately there is an alternative to the basis representation of vectors which is more general and which does permit a generative interpretation.
The simplest such approach is provided by the theory of function interpolation and in particular by the Whittaker-Shannon-Kotelnikov Theorem (WSK Theorem) Zayed . The idea is to begin with a sampling of a function at a countable set of points and from the sampled values attempt to reconstruct the original function. The theorem originated in signal theory and versions are used today in the digitization and reconstruction of audiovisual signals. As it stands the idea is too vague, since an infinite number of functions can be constructed from any countable set of values. In signal theory the normal way to limit this plethora is to restrict consideration to entire functions lying in $L^2(\mathbb{R})$ which are band limited, meaning that their Fourier transforms are nonzero only within a bounded interval of frequency space, usually $[-W, W]$. The interpolated function is constructed as a sum over the sampled values of the form
$$f(t) = \sum_n f(t_n)\, S_n(t).$$
Unlike the basis construction, the coefficients of this expansion do represent actual values of the function at the sampled points. In this representation one can think of the function as being constructed from a discrete set of events $f(t_n) S_n(t)$, namely the samples at the points $t_n$. Each event is interpreted as a function and these are then superimposed to obtain the final function. The utility of this is that each actual occasion may now be interpreted as a continuous function which, unlike the occasion itself, which is localized in spacetime, extends throughout the entire spacetime. The functions of physics may now be seen as idealizations in which the number of samples is infinite and past, present and future information persists indefinitely. One may think of the function as being generated through the incorporation of ever more points into the construction, and these points may be considered as temporal, spatial, or spatiotemporal as one likes. At first glance this might suggest that the past must persist into the future in order for these functions to be constructed, but as they are merely interpretations of experience it is only necessary that information concerning the past persist in the present moment, so that each current actual occasion is capable of generating an appropriate interpretation. Such information is already encoded in each actual occasion in the form of its content. Thus one may think of the function as being generated from the continual creation of actual occasions through the persistence of information residues.
The simplest interpolation models are those in which the functions $S_n$ are all derived from a single template function $S$. The simplest means of doing so is through the use of a translation operator $T_{t_n}$, so that one may write
$$S_n(t) = T_{t_n} S(t) = S(t - t_n).$$
One may then write the original equation in the form
$$f(t) = \sum_n f(t_n)\, S(t - t_n),$$
where one may more easily consider $f$ to be constructed as a sum of primitive events, each of the form $f(t_n) S(t - t_n)$.
The Whittaker-Shannon-Kotelnikov theorem asserts that if $f$ lies in $L^2(\mathbb{R})$ and its Fourier components lie within the range $-W$ to $W$ for some frequency $W$ (band limited), and if we sample the function at an (infinite) set of discrete times $1/2W$ seconds apart, then we can write
$$f(t) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2W}\right) \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}.$$
This series is in fact both absolutely and uniformly convergent on compact sets. This is a stronger result than for the usual Fourier series representations.
Writing $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$, we can rewrite the formula above in the following form:
$$f(t) = \sum_{n=-\infty}^{\infty} f(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),$$
where $T = 1/2W$.
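The reconstruction formula can be checked numerically. In the minimal sketch below the band limit $W$, the test signal, and the (necessarily finite) range of samples are assumptions made for illustration; because the sum must be truncated, agreement holds only up to a small truncation error of the kind discussed next.

```python
import numpy as np

# Reconstruct a band-limited signal from its samples via the WSK formula
# f(t) = sum_n f(nT) sinc((t - nT)/T), with T = 1/(2W).
W = 4.0                      # assumed band limit in Hz
T = 1.0 / (2.0 * W)          # sampling interval

def f(t):                    # a signal band-limited well inside [-W, W]
    return np.sinc(2.0 * t)  # Fourier support is [-1, 1] Hz

n = np.arange(-2000, 2001)   # a large but truncated set of sample indices
samples = f(n * T)

def interpolate(t):
    # np.sinc(x) = sin(pi x)/(pi x), matching the WSK kernel exactly
    return np.sum(samples * np.sinc((t - n * T) / T))

t0 = 0.123                   # an off-sample point
print(abs(interpolate(t0) - f(t0)) < 1e-3)  # True, up to truncation error
```

Oversampling (here by a factor of four relative to the signal's actual band limit) makes the sampled values decay, which keeps the truncation error of the finite sum small.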
Of course this will converge to the original function only in the case that one has a sampling over an infinite collection of points. In reality one will have only a finite sample to work with and so there will be errors arising from the undersampling. There are several results dealing with the effect of these truncation errors but the simplest is perhaps the earliest discovered. Sampling over $2N+1$ points, we define the truncation error to be
$$E_N(t) = f(t) - \sum_{|n| \leq N} f(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right).$$
Let $F$ denote the Fourier transform of $f$, and let $E = \int_{-W}^{W} |F(\omega)|^2\, d\omega$; $E$ is called the total energy of $f$. Then