On Quantum Resonance


Welcome to my Quantum Philosophy Blog

Wavefunction and Space Topology

Quote: “Someday we’ll understand the whole thing as one single marvelous vision that will seem so overwhelmingly simple and beautiful that we may say to each other: ‘Oh, how could we have been so stupid for so long? How could it have been otherwise!’” (J. A. Wheeler)

I will expand my thoughts on the issue of wavefunction symmetry based on various papers by David Bohm, Louis de Broglie, Dürr, Holland, Bell, Goldstein, Tumulka, Rovelli, and Struyve.

The wavefunction is used in quantum mechanics to describe physical systems. It maps the possible states of the system to probability amplitudes, elements of a complex Hilbert space; the squares of their absolute values give the probability distribution over the possible states. The wavefunction can be a complex vector with a finite or infinite number of components, or a complex function of one or more real variables. For systems with multiple particles, the underlying space represents the possible configurations of all the particles, and the wavefunction describes the probabilities of those configurations.
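The Born rule described above, where the squared absolute values of complex amplitudes give probabilities, can be shown with a minimal sketch; the three-state system and its amplitude values below are invented purely for illustration:

```python
import math

# Hypothetical amplitudes for a three-state system (illustrative values only)
amplitudes = [1 + 1j, 0 + 2j, 1 + 0j]

# Normalize so the total probability sums to 1
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
amplitudes = [a / norm for a in amplitudes]

# Born rule: the probability of each state is the squared absolute value
probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)        # [2/7, 4/7, 1/7] up to rounding
print(sum(probabilities))   # ~1.0
```

Whatever the amplitudes are, the normalization step guarantees the probabilities sum to one, which is exactly what lets the squared magnitudes be read as a probability distribution.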

A paradox appears when one tries to measure position and momentum and the probability is turned into a real number. The decision where to measure violates the temporal causality of the particle being measured. The wave/particle duality rests on the same issue. To avoid causality paradoxes, the Copenhagen interpretation requires the collapse of the wavefunction. Louis de Broglie offered a solution, which he called pilot-wave theory, at the 1927 Solvay Congress in Brussels, but abandoned it after it was criticized. In pilot-wave theory the particle is guided by a field that is represented by the wavefunction. David Bohm’s ontological interpretation is similar to de Broglie’s pilot wave and is based on hidden variables (or, as Bell called them, beables) of defined particle position and momentum.

Quantum theory explains behavior by means of ‘microscopic’ systems represented by a wavefunction ψ, which is defined by the microscopic initial wavefunction and its eventual collapse caused by a ‘macroscopic observer’ (human or not). The definitions of ‘microscopic’ and ‘macroscopic’ systems, or of ‘observer’ and ‘system’, are unfortunately ambiguous and present a serious flaw. If the macroscopic system is regarded as a collection of microscopic systems, then the wavefunction of the total system should also evolve according to the Schrödinger equation, without collapse. Bell finally showed that any realistic theory leading to the same statistical predictions as standard quantum theory must be non-local. Non-locality refers to the effectiveness of fields in quantum theory. Just because a field is non-local and effective everywhere at the same time does not imply that signals or energy can be transferred faster than the speed of light. The non-locality manifests itself in the fact that the position of one particle’s beable may depend on the positions of other particles’ beables. This dependence is instantaneous no matter how far apart the other beables may be located. In this way the problem of the ‘observing macroscopic’ system magically defining the causal reality of the ‘observable microscopic’ one is avoided, and it becomes clear that both entities are wavefunctions.

Problems arise when translating the quantum mechanical interpretation of non-relativistic quantum theory to relativistic wave equations: a) identifying a future-causal current that can be interpreted as a particle probability current; b) defining a positive-definite inner product; and c) allowing particles and anti-particles to be freely created and annihilated without the need to consider negative-energy states. The above led to quantum field theory, where field operators take over the role of the particle operators of non-relativistic quantum theory. Therefore, fields rather than particles seem to be the most natural beables in the pilot-wave approach.

I propose that the pilot-wave field must be represented in space topology (space-time geometry) in the same way as spin-foam models are used in Rovelli’s non-perturbative quantum gravity. Imagine it as a space topology that exhibits non-local density changes which, because of general relativity (GR), are not relative to a background time. Rovelli also defined the concept of a partial observable as any quantity that can be measured even without prediction. What GR can predict is the correlations between partial observables, which manifest as ‘propagators’ between possible and actual Hilbert spaces. I see the pilot wavefunction as describing propagator ‘density grooves’ in space topology in which the energy between observer and measurer can travel. Feynman accordingly described particle paths as an integral over all possible ‘grooves’.

If the complete microscopic system is not only defined by the wavefunction but also by some extra variables that have an objective existence, those variables should also determine the outcome of experiments. The Bohm interpretation is causal but non-local and non-relativistic: every particle travels in a definite path with definite position and velocity. We do not know what that path is, but we can find out what path (density groove) a particle traveled, while the uncertainty of position and momentum remains.

I propose that using topology to describe the particle causality phenomenon is appealing because it is simple. It allows us to disregard all sorts of other information that confuses the issue. Connectivity and tensors are all that remain. Like others, I propose that demanding classical causality at all times most likely reduces the possible solutions to a problem substantially. Einstein offered us many great advances, but that does not mean that all his propositions are valid conclusions, such as that energy must at all times be a cause of gravity. Einstein failed to distinguish between the active gravitational mass of matter, the contribution to the density of matter, and the inertial mass of matter. A particle can clearly exhibit inertial momentum and not have mass or cause gravity. That gravity changes the space-time tension and makes photons follow the tensors sounds remarkably like the ‘density grooves’ of the wavefunction. And what about the phase transitions exhibited by, for example, the Bose-Einstein condensate? Are those not plausibly connected to a change of topological degrees of freedom? Meaning that the number of possible patterns the space topology can take is reduced as the amount of potential energy in space-time tensors is reduced.

To me that sounds as if a timeless, non-local topology of space is the only thing that is common when we talk about any kind of field or particle!

Filed under: Cosmology, Quantum Field, Space Topology

Vacuum Topology and Symmetry

A vacuum symmetry related thought with a cosmological aspect: Either a) the total energy of the universe was injected at the Big Bang and thus the universe is not symmetric, or b) the universe is symmetric and all the energy cancels itself out. The first law of thermodynamics mandates conservation of energy; therefore I propose that this suggests b), a symmetric universe. I know that not all laws can be inverted, but this seems utterly plausible to me. I will need to investigate Emmy Noether on this. And if the universe exhibits perfect symmetry now, then either it started off with a huge amount of energy, or it started out at zero energy and positive and negative energy were created in sync. The latter seems the more sensible conclusion. If so, it puts many theories into a new light.

A spacetime symmetry related thought: A photon moving forward in time requires another one to move backward, as clearly shown in Feynman diagrams of electrons shedding a photon. Therefore time must involve a symmetry too. And it does: the photon wavefunction remains energetically linked with its parent electron until absorbed by the new foster parent, maybe 50 lightyears away/later.

Hey, the most plausible argument for the collapse of the wavefunction being no more than a silly idea is that it is not symmetric. It could be symmetric if the photon is seen as a field of the electron. QM actually links them tightly, and a wavefunction is the mathematical version of this field. If a wavefunction collapses instantaneously then it would have to build up timelessly as well. Symmetry defines systems in which, under continuous (e.g. rotations described by Lie groups) or discrete (e.g. reflection) transformations, aspects of these systems are unchanged. Allowing a wavefunction to behave totally differently when it is expanding (traveling at the speed of light) and when it collapses (instantaneously) would break the invariance requirement of symmetry under arbitrary differentiable coordinate transformations.

So can we assume that the photon wavefunction instantly inhabits space as long as the electron holds this energy state? If fields are timeless and non-local, then the photon field of an electron is there ‘after’ its energy has been transformed. A photon therefore is not an independent entity but ACTUALLY IS the field entity of the higher energy potential of an electron. A photon traveling through space is a virtual particle or symmetry transformation between two electrons, and that too is shown in Feynman diagrams. The gauge fields are just mathematical devices needed to synchronize the time scales.

What if the amount of energy loss of the photons is not related to the square of the distance but to time? Time is expended in relationship to how many photon wavefunctions have to be stacked on top of each other to kick the receiving electron into a higher energy level. Entangled photons are caused by a symmetric wavefunction that conserves energy between electron and photon, and thus state changes can happen non-locally and timelessly. The idea throws out most of GR. Shucks. Frequency is cycles per second, so without time there is no wave.

Another idea: What if the vacuum Higgs condensate is simply passing the energy representing the photons from one quantum oscillator to others? As energy is spread to more and more lowest-state electrons, each transfer is a clock tick. But ‘Dirac Sea’ electrons have mass and charge, so something is missing in the model; except if it were a superfluid of electrons in a single quantum state, but then the photon would travel timelessly. If not an electron, it must be a kind of fermion for the Pauli exclusion principle to work. In supersymmetry the assumption is that the vacuum is also filled with LSPs (Lightest Supersymmetric Partners), which some call WIMPs, making up the dark matter… sigh, more stuff filling the vacuum.

Time is maybe not a stable dimensional feature of space, but an inherent property (to avoid causal consequence) of an energy-conserving transfer, and thus a symmetry. It would be neither spacetime nor a continuum. Ouch, here I am again bumping into the GR dogma. For time to flow you need to expend energy. The photon wavefunction is timelessly suspended until the energy is transferred to the receiver because of time quantization? That would also allow for the strange non-local paradoxes of the EPR experiments.

Hm, the energy conservation between gravity and mass must be a symmetry too. And how does all that relate to spontaneous symmetry breaking?

Filed under: Space Topology, Spacetime, Symmetry, Vacuum


There seems to be a relationship between the descriptive powers of mathematical logic and nature, but we must be careful to always remember that we are still making an assumption as to the applicability of mathematics. We have yet to find a formula that is applicable beyond a certain magnitude of scale. Kurt Gödel further proved with his Incompleteness Theorem that in any sufficiently powerful formal system there are always statements that can neither be proved nor disproved within it.

An important contribution to finding working models of nature was provided by the German mathematician Emmy Noether. Noether’s Theorem states that for every continuous symmetry in the laws of physics there is a corresponding conservation law (conservation of energy, for example, corresponds to time-translation symmetry). It is one of the few theorems that applies to classical as well as quantum physics, simply showing how fundamental Emmy Noether’s discovery is. Symmetries are fundamental structural ordering concepts of our universe, much like the Fibonacci series, in which each number is the sum of the two before it (1, 1, 2, 3, 5, 8, 13, 21, …). There are, for example, only 17 ornamental pattern designs possible in two dimensions, as proven by Evgraf Fedorov in 1891 and rediscovered by George Pólya in 1924. Symmetries are simple mathematical concepts that repeat throughout some structure in some predictable form.
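The Fibonacci recurrence just mentioned, where each number is the sum of the two before it, takes only a few lines to sketch:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers; each is the sum of the two before it."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # add the previous two numbers
    return seq[:n]

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```

The whole structure unfolds from one simple repeated rule, which is the sense in which it resembles a symmetry: a simple operation repeating in a predictable form.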

Inertia of massive bodies must also be a kind of symmetry, as it requires a conservation law. Inertia seems to be connected to mass only, but if mass is just an energy state supplied by the Higgs field (whose quantum is the Higgs boson), does inertia apply to all kinds of spinning fields too? According to Noether’s theorem, rotational symmetry implies conservation of angular momentum. Angular momentum can be a body circling another, held by a force such as gravity, or a body spinning around its axis. The effect can be clearly seen when spinning a gyroscope, frisbee or top. In a weightless vacuum it will spin forever. Conservation of angular momentum can be seen when a skater pulls in his arms to speed up a spin. The conversion of diameter to speed of rotation involves a symmetry. The transport of energy from one entity to another (which means ANY entity interaction) must be through a symmetry operation to retain its energy.
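The skater example can be made quantitative: with no external torque, angular momentum L = I·ω is conserved, so reducing the moment of inertia I speeds up the rotation ω. The numbers below are invented for illustration:

```python
# Conservation of angular momentum: L = I * omega stays constant without external torque.
I_arms_out = 4.0    # kg·m², moment of inertia with arms extended (assumed value)
omega_out = 2.0     # rad/s, initial spin rate (assumed value)

L = I_arms_out * omega_out   # conserved angular momentum: 8.0 kg·m²/s

I_arms_in = 1.0     # kg·m², moment of inertia with arms pulled in (assumed value)
omega_in = L / I_arms_in     # conservation dictates the new spin rate

print(omega_in)  # 8.0 rad/s: quartering the inertia quadruples the spin
```

The kinetic energy changes in the process (the skater's muscles do work pulling the arms in), but the angular momentum itself stays fixed, which is exactly what Noether's theorem ties to rotational symmetry.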

Where does the stability of an atom come from? Why do atomic particles not follow the second law of thermodynamics and dissipate to their lowest energy state? Why are electrons trapped in an electromagnetic relationship to the nucleus, vibrating there at various multiples of their base frequency with related amounts of energy? Pauli’s Exclusion Principle describes the effect as a law, saying that two fermions cannot occupy the same location or exactly the same quantum state. The principle does not, however, explain WHY that happens; it is just an observation inside the mathematical model that we use. Mexican-hat-shaped energy potentials are possibly one cause of the location but not of the exclusion. The conservation of angular momentum is an important element in the stability of many dynamic systems (riding a bicycle, for example), because the system requires external energy not only to change its rotational speed but also to change the angle of its axis of spin. It can therefore be assumed that conservation of angular momentum is responsible for the stability of nuclear structures.

What about the spin of particles in relation to conservation of angular momentum? Spin is most likely a kind of wave phase angle rotation along its orbital path. Spin-1/2 requires two turns to complete and looks like a Möbius loop. How can spin have angular momentum when it is a point particle? What is spinning if there is no physical particle but just some medium vibrating? Can a spinning entity have any arbitrary amount of momentum? It seems nearly obvious that here too we find a quantization effect linked to Planck’s constant. Angular momenta are always integer or half-integer multiples of Planck’s constant divided by 2π, also written as ħ (‘h-bar’). There is something rotating at the scale of the Planck length. Inertia is apparently not just a feature of mass, and the residual energy of mass at rest comes from the energy potential stored in the summary momentum of all its particles.

We can imagine a wave on a water surface and the density changes in a body as sound waves travel through it, but what are we to imagine as the spinning wave of a particle? That is not answered by the Standard Model nor by any other current theory. We can assume that the geometric topology of space exhibits cyclic density changes in its structure, which are calculated in terms of symmetry operations by Lie group transformations of its tensors. There is no distinction between fields and particles on a quantum level, and thus I propose that the ‘field’ a ‘particle’ apparently ’causes’ is a symmetry too. The localized positive energy density spike of the ‘particle’ is most likely symmetrically and timelessly balanced by an opposing negative potential of space topology. As a consequence of declaring that these fields propagate at the speed of light, the vacuum is assigned truly amazing properties to close the resulting logical gaps. The instantaneous collapse of the wavefunction is one of these.

Vacuum topology also carries the gravitational field caused(?) by a massive body, which also bends its space-time structure when looked at through Einstein’s general relativity. In quantum field theory the vacuum is seen as QM oscillators carrying zero-point energy, which is held responsible for the cosmological constant that causes the universe to expand. In quantum electrodynamics the electromagnetic field is quantized too and, like the harmonic oscillator in quantum mechanics, its lowest state is not zero. Thus the small zero-point oscillations cause the electron to perform oscillations in its orbital path (the Lamb shift). Dirac saw the vacuum as filled with negatively charged ground-state electrons that prevent normal electrons from dropping to their lowest ground state due to the Pauli exclusion principle. The vacuum also supposedly carries the Higgs field (or condensate) with a non-zero expectation value, so that its interaction with gauge bosons causes those to acquire mass.

The current discussion around dark energy and dark matter is to my mind rather interesting, because the total energy of all the above vacuum features would already be immense. That is implausible in a cosmological sense. The universe must exhibit symmetry, and the sum of all its energy in all its forms must be ZERO or at most a tiny number (Planck’s constant?). Each force, field or potential is linked to an equal counter-action because of symmetry. The energy of any particle/field must be balanced by a counteracting field in a manner that its interaction with the whole universe cancels out. Whether any of that is connected to Big Crunch/Big Chill theories is pure speculation with less to go on than my blog musings.

That the vacuum is filled with something is shown in particle accelerators. When high-energy quarks are expelled from a proton by a particle collision, they tear up the vacuum structure, and particle jets consisting of billions of new particles appear from ‘nowhere’. The huge momentum contained in those quarks is dissipated into the vacuum and ‘creates’ all those particles, showing that they are just energy perturbations of the vacuum topology. To me that is proof enough that all entities are such perturbations. It is likely that the vacuum is not filled with multiple entities but JUST ONE, and all its possible energy states are related to symmetry operations. That is in principle the idea behind string theory, while from my limited perspective loop quantum gravity is more plausible. Meaning that only certain structures and numbers between entities are possible, and the symmetry transformations represent the possible energy interactions of this universe. Symmetry breaking is caused by the inability to transform some entity topology patterns into others without rupturing the topology structure.

Filed under: Continuum, Mathematics, Theory

The Issue With Spontaneity

Niels Bohr made a very important remark: ‘The task of physics is not to find out how nature is. Physics is only concerned with what we can say about it.’ David Lindley likened that to Wittgenstein’s ‘Whereof we cannot speak, thereof we must be silent.’ I fully agree that science is not about discovering nature but rather about creating means to communicate about it. It is about understandable concepts that humans can relate to.

In relationship to causality in physics there is one term that has been used for over a hundred years while its meaning is incredibly ambiguous – the adjective SPONTANEOUS.

Spontaneity describes something that happens naturally, unconstrained and unplanned, or from internal forces or causes without external influence. That is not necessarily how it is used in physics. Its synonyms are even more troubling: automatic, impulsive, instinctive, involuntary, reflex, unpremeditated, free, uncompelled, unforced, willful and even casual.

I would say that we describe something as spontaneous when we simply do not know what the cause is. Our lack of knowledge can simply mean that we assume spontaneity wrongly. The very first use of spontaneity was in thermodynamics, where it was assumed that heat flows spontaneously from a hot to a colder area. Nothing could be more wrong. The cause is the existing potential between higher and lower energy states. While it is apparently the case that no one tells or forces a single molecule to pass on its Brownian motion of heat to the next one, that molecule is enticed to act by outside conditions. A claim of spontaneity assumes some decomposable inner function that is the cause of the willful act.

It seems quite obvious that these spontaneous actions emerge from energetic relationships that promote the interaction of entities. Emergence means that there is no decomposable inner functionality, but the function appears ONLY in the relationships between two entities. It has no other independent substance or existence.

In quantum mechanics the proposal is that the probability potential of an entity is the driver of the spontaneous actions of that entity, for example to shed a photon or an alpha particle. As strange as it may seem – and I hope you can follow my reasoning here – the assumption of truly spontaneous acts is the reason for the problems with causality in physics. I even propose it is the reason for possibly all paradoxes such as the wave/particle duality, entanglement and photon tunneling.

The spontaneous shedding creates the existence of that particle, which now has to be considered and communicated about as such! We say: we don’t know how it got here – but here it is! And then we are utterly surprised that the particle we created out of thin air resists detection and measurement. We start with an unexplained spontaneous act and then we expect causality. We cement the ‘existence’ of that particle or wave packet by probabilizing about its Hamiltonians and Lagrangians (energy and action).

Let me try to say it with Bohr: ‘When we talk about the probability potential of a particle, it is a statistical description that is helpful, but it is NOT the driver that causes a natural, spontaneous act.’ Probabilities are a post-mortem analysis that allows us to understand the likelihood of something. While I as a person have a certain probability of having a certain accident, that probability does not tell me what the causes of such an accident would be, nor when it will actually happen. Probability is a correlation of data; it is not causation!

Because Quantum Electrodynamics (QED) – for which Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga received the 1965 Nobel Prize in Physics – deals with interactions only, it is the most accurate (to about 12 decimals) physical theory thus far. But even QED can only predict the probability of what will happen in an experiment. QED describes charged-particle interactions using perturbation theory, represented as Feynman diagrams. A Feynman diagram assigns each photon exchange path a complex-valued probability amplitude, and the paths with stationary phase (no destructive interference) represent the stationary classical path between the two points. The photon does not cause the energy transfer; it is just a representation of it. It is not shed spontaneously, but emerges because of the potential between sender and receiver. The wave resonances between the sender and receiver are calculated by QED.
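The amplitude bookkeeping described here, where each path contributes a complex amplitude and the probability comes from the squared magnitude of the summed amplitudes, can be sketched with two toy paths; the phase values are invented purely for illustration:

```python
import cmath

# Two hypothetical paths, equal magnitude, different phase (illustrative values)
amp_path1 = cmath.exp(1j * 0.0)        # phase 0
amp_path2 = cmath.exp(1j * cmath.pi)   # phase pi: exactly out of phase

# Amplitudes add BEFORE squaring; probability = |sum|^2
prob_opposed = abs(amp_path1 + amp_path2) ** 2
print(prob_opposed)  # ~0: complete destructive interference

amp_path2_aligned = cmath.exp(1j * 0.0)  # same phase as path 1
prob_aligned = abs(amp_path1 + amp_path2_aligned) ** 2
print(prob_aligned)  # 4.0: constructive interference, more than the classical 1 + 1
```

This is the formal content of "paths with stationary phase dominate": amplitudes with nearly equal phase reinforce each other, while rapidly varying phases cancel out of the sum.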

The probability amplitude is a statistical means of communication about the abstract potential of one entity such as a photon, but the cause of the spontaneous interaction is the relationship between the two probability amplitudes. The problem is that current theories (QM and GR) do not allow two such entities to interact in a background-invariant (relativistic) manner. David Bohm suggested that there are non-local hidden variables that enable that communication. Bell’s Theorem, confirmed by experiment, shows that local hidden-variable theories cannot predict the measurements of distant entangled photons. But what if there are no hidden variables, local or not, but already too many? What if space and time are not a continuum but just two properties of (or consequences of, meaning caused by) energetic interactions, and not the background against which we measure momentum and location? Would that not be true general relativity?

I think we need to shed spontaneity in physics rather than photons, and we need to stop looking at the causality of particles hitting other particles. We need to continue in the direction of QED and drop some of Einstein’s assumptions that are still too closely linked to classical physics. Entanglement, photon tunneling, and EPR quantum eraser experiments say that we are wrong about time or wrong about the speed of light. You may be surprised that QED, too, proposes that photons travel at speeds faster or slower than light that just average out. Gravity as the only measurement of mass (inertia is not an effect of mass, I believe) does interact timelessly at a distance and not at the speed of light.

I spontaneously shed herewith Einstein’s speed-of-light shackles …

Filed under: Questions, Terminology

The Copenhagen Interpretation and its Consequences

In September 1927 at a meeting in Como, Italy, Niels Bohr presented his fairly philosophical Copenhagen interpretation of quantum mechanics. Bohr made it clear that measurement not only disturbs what is being measured but, on a quantum mechanical level, DEFINES what is being measured. While that seems only a small step forward, it is so logical that it is very strange that Einstein remained utterly opposed to the interpretation, positioning it as a religious belief rather than a scientific thesis. When you get near or to the smallest energy quanta possible in this universe, it is quite plausible that the process of measurement not only disturbs but more or less destroys the existence of what is being measured. As such, it is plausible that all effects related to the measured entity disappear. What Einstein was so opposed to was the concept of probability required by quantum mechanics. A photon has a certain probability potential to be measurable (to exist?) in a certain location with a certain energy (momentum). Once you interact with it, that probability potential, which according to Einstein must represent valid information in any given location, disappears from the area of influence at once; this changes the state of a large volume of space faster than the speed of light.

This is clearly a problem when you want to see light as a wave packet that travels through space. It is even stranger when you want to see light as a point particle (with no real size) shooting around. It seems quite obvious to me (and I hope to you) that both the light wave and the point particle should be seen as mathematical abstractions only, representing the energy being transferred between two larger entities. In the case of measurement, the energy-receiving entity is the measurement device! So quite clearly the measurement defines what is being measured. Measurement is a specifically designed interaction chain of causal events that produces some kind of data value that we can infer, with some probability, to be related to the measured energy transfer. We cannot measure the ‘state’ of an entity, only its energy exchanges.

All this is related to Heisenberg’s Uncertainty Principle, which is a fundamental phenomenon directly linked to Pauli’s exclusion principle. Not only can no two wave entities occupy the same space, they also cannot be at exactly the same energy level. As we try to measure the location of an entity with higher and higher accuracy, we can be less and less sure of its momentum (vector and mass), and vice versa. We lose our ability to create causally predictable interactions that produce a reliable measurement. But it is not just a measurement problem; it is an inherent property of the mathematical models that we use. How much it actually is a real property of the universe we really don’t know. If we assume time- and space-directed causality, then we also assume that the universe actually acts like our mathematical model says. If we measure an electron’s momentum with high enough accuracy, we basically spread its potential all over the universe. A small enough entity has the probability to shift its probable location all over the place.

That raises the unpleasant question, however, of whether all energetic entities on a particle level are no more than abstract mathematical devices for energy interchange. At first it was assumed that light was the only entity exhibiting the wave/particle duality, despite the disturbing situation that all nuclear components followed Schrödinger’s wave form of quantum mechanics. The amazing next step was to take the probability wave into the nucleus to answer the question of what could cause the apparently spontaneous decay of atoms, as found by Madame Curie. The question is closely related to Einstein’s explanation of black-body radiation, which led him to the quantum phenomenon of light. If nuclear particles were held together by strong forces, there would be no reason why they would suddenly let go and fall apart. There could be external sources of energy that push the nucleus over the edge of being stable, but such outside influences apparently have to be quite violent. If we look, however, at the particles of a nucleus as a probability wave, then it would also extend well beyond the principal boundaries of the atom. The particles could at some point in time have the probability to be outside the nucleus, and thus the atom falls apart without any other external influence. It was the 24-year-old Russian George Gamow who first wrote a paper in which he used Schrödinger’s equations to explain alpha radiation in this form. He proposed that it allows the atom to fall apart spontaneously. The alpha particle leaves the atom through an unknown device! Today such activity is called quantum tunneling; it is broadly observed and can be used in experiments with photons. The more energetic the particles, the shorter the possible tunnels are on average. There is, however, some small probability that even a proton might tunnel to the edge of the known universe.
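Gamow's tunneling argument can be illustrated with the textbook WKB estimate for a rectangular barrier, T ≈ exp(−2κL) with κ = √(2m(V₀−E))/ħ, which shows the transmission probability falling off exponentially with barrier width. The particle and barrier values below are invented for illustration:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J·s
M_E = 9.1093837e-31      # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunneling_probability(energy_ev, barrier_ev, width_m):
    """WKB estimate T = exp(-2 * kappa * L) for a rectangular barrier (E < V0)."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# Illustrative: a 1 eV electron meeting a 5 eV barrier of growing width
for width_nm in (0.1, 0.5, 1.0):
    t = tunneling_probability(1.0, 5.0, width_nm * 1e-9)
    print(f"{width_nm} nm: T = {t:.3e}")  # drops exponentially with width
```

The exponential sensitivity is why nuclear half-lives span many orders of magnitude, and why the probability of a proton tunneling a macroscopic distance is fantastically small without ever being exactly zero.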

Quantum mechanics cannot explain, and does not try to explain, why the spontaneous action happens. The probability wave is a POTENTIAL only and cannot be the CAUSE of the atom disintegrating or shedding an alpha particle. Quantum mechanics does not allow for the causality of classical physics. Once again that led to Einstein’s opposition. But does the Copenhagen interpretation not talk about the measurement being the CAUSE of what is being measured? The photon or electron no longer spontaneously travels through empty space; it is caused to appear where and when the measuring device is put. Can someone tell me why that should be a measurement phenomenon only? The consequences of applying it to measurement only led to irrational ideas such as Wheeler’s participatory universe, which only comes into existence when being observed by a conscious entity. The best-known fable is that of Schrödinger’s cat, which is in a state of being both dead and alive because we do not know when the spontaneous nuclear decay will trigger the cat’s death until we open the box to look at it. That is quite irrational and documents the problem well. In the other direction we have the many-worlds interpretation, which assumes that each potential from quantum probabilities leads to a new universe being created with a different future. The cat is alive in one universe and dead in the other. Modern quantum physics has different models for that, but more about that later.

Einstein demanded that quantum and classical physics accept complementarity, meaning that causal models had to be applicable in both. So what if that is the key error? It is always dangerous to question Einstein – the holy god of physics – but here I go. It seems to me that the main issue is with causality and locality, being measured in time and distance. Einstein had both deeply entrenched in his E=mc², demanding that his special and general relativity be adhered to in all formulations and theses. A particle had to have a certain location and momentum at a certain time, even if we are unable to ascertain them relative to other entities. And when two entities exchange energy by being in the same location at the same time, then causality must be observed. That was Einstein’s own faith, from which he never wavered. I propose that it is this unproven faith that kept the great thinker from discovering more intriguing properties of this universe.

Filed under: Spacetime

Causality and Complex Adaptive Systems

In my previous post I discussed why humans are driven to identify causalities. It is a learning pattern for better survival. But that human drive should not make us misjudge the concept of causality in physics. Causality is at best a rough guess that is probably approximately correct. In complex adaptive systems, causality in its classical physics form has little relevance, and the same is true for quantum physics. Causality is also shaped by the means with which it is searched for.

Looking for a certain pattern already requires a hypothesis, and it is the form of the hypothesis that shapes the answer. The search pattern of what you type into Google very much defines what you can find. The biggest problem with Google and any search is relevance. Most of what we find on Google is utterly irrelevant to our intent – 99.9999% of it. The same is true for all of science. The patterns of your search will define what you find. The language of your mathematics shapes all the answers that you can get. Most of what you get is utterly irrelevant … 99.9999% of it.

The inability to identify the principles behind the gradual development of species over time led Darwin to assume that it is random changes to our genes that are then weeded out by survival of the fittest. That is most likely wrong, as much as he was right about the true controversy of the time: that humans and primates have a common genetic past. Random genetic evolution is, in the light of current understanding, ever more unlikely. The DNA itself does not cause a certain body to be developed. It is a passive archive, containing protein codes that have to arise by self-organization into a lifeform. Genetic variation does not provide phenotypic variation on which selection may then act. Such chance changes are actually repaired by a correction mechanism in replication to avoid catastrophic gene transcription errors. A mutation that actually causes a phenotypic effect has to occur as a concerted change in many areas of the DNA code. Gene expression is controlled by a higher process that switches genes on and off. Exactly how is not yet understood. There is however a large amount of ‘junk DNA’ that could have, or seems to have, those regulatory effects at a higher level or levels. There is a genetic memory that causes our inborn instincts as neural patterns in our brain. Regulatory gene networks would promote phenotypic changes by reacting to the environment, while gene expression fine-tunes as needed, also in response to the environment.

It is thus utterly unlikely that emergence or evolution is random or by chance. This is why religious believers can say from their gut that our existence is not a random, senseless event. They are right, but they obviously found answers that were shaped by their questions. Neither emergence nor its consequence, evolution, is however predictable or reducible. Even where a reduction to parts or steps is possible, it does not answer the question ‘why’, because there is no such answer. Things happen because the environment supports them. There is no cause in the sense of something that drives happenstance forward; rather, there needs to be an inherent opportunity or potential and a receptive, supportive environment. The potential has to exist in our genes, but the environment enables and receives it.

The balance of opportunity potential and receptive environment is a resonance and works on all levels. What science calls either a light wave, a photon particle, or a probabilistic quantum wave packet is an opportunity potential in the complex wave that represents what we call an atom. The electron wave of that atom cannot, without causal reason, simply and spontaneously shed that photon potential into empty space, where it would be suspended in nothing, or ripple through quantum field oscillators until it falls off the edge of the universe, or magically returns into it as it folds back onto itself, or creates new quantum oscillators that expand the universe forever. That widely spread-out opportunity potential (probability wave function) will however collapse as a whole AT ONCE across the universe as soon as we interact with it in any way. That picture makes no sense at all and is in stark contradiction to Einstein. The information and energy content of that wave is eradicated from the universe at higher than the speed of light. Therefore Einstein had to see the photon (and actually also the electron) as a point particle, and that means that the photon does not physically exist. It is a mathematical vehicle only. It must be the interaction ONLY that is of relevance. Many models already use ‘virtual photons’ for energy transfers because they do not follow Einstein’s relativistic principles. The Standard Model uses different types of bosons to model those energy/information transfers.

The same concept must be true for all of the universe/nature. Opportunity potential and a receptive environment are the same for photons, DNA replication, bringing up children, or a free market economy. Sender and receiver shape and form the energy of information exchange by resonance. The time arrow of causality, or the causal cone of a photon, is just that opportunity potential, as is the collapsible wave function. Einstein realized that light must be quantized, as otherwise there could be no equilibrium in black body energy radiation. We can take that thought a step further. The equilibrium of a black body (thus of any physical entity) is caused by the quantization effect of harmonic resonances. Poincaré showed that in a dynamic planetary system only orbits with certain harmonics are stable.

Energy/information is only transferred when two entities share by chance the potential/receptor space AND they exhibit a resonance. The first is a probability function, the other a harmonic principle. You can try that next time you go into a music store. Find two large contrabasses close to each other. Strike one of their strings hard and then check the same string on the other, and you will find it vibrating. No other strings will vibrate (actually, some harmonic overtones will). My striking the string creates the potential, while the air and the other string being closely enough tuned represent the receptive environment. I am unable to tell you in advance whether there is a string that will vibrate, but if one does, I know immediately that it is at the same frequency as mine and that it does not vibrate by chance. I suddenly have an emergent orchestra of vibrating strings.
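The contrabass picture can be sketched in a toy simulation. This is a minimal illustration under made-up parameters (unit masses, an arbitrary coupling constant standing in for the air between the strings), not a model of real instruments: two harmonic oscillators are weakly coupled, the first one is ‘struck’, and appreciable energy only builds up in the second when the frequencies match.

```python
import math

def peak_response(f1_hz, f2_hz, coupling=2.0, t_total=15.0, dt=1e-4):
    """Strike oscillator 1, integrate the coupled pair with
    semi-implicit Euler, and return the peak amplitude that
    oscillator 2 ever reaches."""
    w1sq = (2 * math.pi * f1_hz) ** 2
    w2sq = (2 * math.pi * f2_hz) ** 2
    x1, v1 = 1.0, 0.0   # oscillator 1 starts displaced ("struck")
    x2, v2 = 0.0, 0.0   # oscillator 2 starts at rest
    peak = 0.0
    for _ in range(int(t_total / dt)):
        # restoring force plus a weak spring tying the two together
        a1 = -w1sq * x1 + coupling * (x2 - x1)
        a2 = -w2sq * x2 + coupling * (x1 - x2)
        v1 += a1 * dt
        v2 += a2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
        peak = max(peak, abs(x2))
    return peak

# A matched string resonates strongly; a detuned one barely moves.
print(peak_response(1.0, 1.0))   # energy beats almost fully across
print(peak_response(1.0, 1.5))   # only a small residual wobble
```

The matched case shows the classic beat pattern of two identical coupled oscillators: the struck one slowly hands nearly all of its energy to its silent twin, exactly the ‘potential meets receptive environment’ picture above.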

Causality has from a philosophical viewpoint two integrated elements: the causal potential and the complex harmonic receptor effects. Together they form a complex adaptive system of resonant layers between which new phenomena emerge where resonances support information/energy exchange. On the lowest layers, those layers of emergent properties are separated by boundaries of symmetry-breaking phase transitions. More on those in my next posts.

Filed under: Complex Adaptive Systems, Genetics

Causality in Human Psychology

It seems that we no longer have to ask whether nature is a complex adaptive system (CAS), as we can see it at work every day. The faithful in some form of Intelligent Design will attribute creative emergence to their Deity of choice, but we can respectfully position that as willful ignorance. ‘No sir, it is too stressful to consider that I may have been wrong, so I would rather stick with my story before I lose face.’ Strange, that faith and face have nearly the same phonetics. Enough said.

If we now consider that maybe the universe as a whole is a complex adaptive system that creates itself through non-causal emergence, you will find many scientists opposed. For them the discovery of THE (causal mathematical) LAWS has become their faith. But even the invention of probability calculations cannot describe the phenomenon of emergence. There simply seems to be no probability to emergence, because it does not allow reductionism. Scientists cannot and do not want to lose FACE, much like those religious nuts. They fight back by ridiculing people who cannot understand the complex and utterly artificial language of mathematics that they created. They have no proof that this language is in any way related to the way the universe works. They claim that because it can describe some phenomenon in a probably approximately correct way, this must be so.

Let me take one more step back. Why do people exhibit those traits of searching for causality? You may be surprised, but the human ability to recognize patterns and link them causally to our well-being is a drive not much different from nourishment and replication. The essence of human rational thought is therefore linked to causality. It is a much more energy-efficient form of survival than genetically transmitted fear of unsafe situations or dangerous animals. Biochemical emotional steering works a lot better if it is linked to repeated patterns. You only feel fear in certain situations and not all the time. Large sharp teeth equals dangerous! Even if you have never seen exactly that animal before. We are unconsciously and continuously at work discovering those causal patterns of life that make us feel safer. While in our younger years happiness is much related to learning, over time it is replaced by the feeling that the world around us makes a lot of sense. We start to oppose change, we start to ignore and deny change, and we propose that those who do it differently must be wrong because it all worked well before. Sounds familiar? A young university graduate who sat at my table in a discussion about the causes of the current financial crisis was listening to my explanation of a complex adaptive system with interest but a worried face. Eventually he said: ‘What you explained makes a lot of sense, but the staggering complexity and our inability to control it is deeply worrying.’

The whole concept of education and teaching is related to transmitting those learning experiences to the next generations. Our ability to learn would however be completely useless if there were not some drive that makes learning a pleasurable experience. Therefore evolution (!) gave us a biochemical reaction to a positive learning experience, in that the emotional center produces dopamine when we succeed in something or understand something. When we search for a solution to a problem, we trigger brain areas that are emotionally linked to dopamine release. It is a much more effective form of search than hash tables, B-trees or R-trees on abstract terms that then have to be interpreted. The patterns of the current problem situation excite the neural memory network, and the dopamine lights up the patterns that were successful. Therefore our ability to solve problems is not related to learning but solely to our own positive experiences. Therefore you will find that university graduates with a lot of abstract knowledge have no idea how to use it until they run into a learning experience themselves.

Even our own knowledge emerges from the background of patterns and experience. If the learned pattern is not exposed to a receptive environment, it is meaningless. This excursion into psychology and neurology was necessary to make you aware of why we are searching for causality. I propose that this is however a fallacy. There is no ultimate causality that can be expressed in formulas and logical Boolean rules. There are patterns that are related to certain other patterns appearing in a certain sequence on a classical physical level, but that is just a small fragment of chemistry and physics. We can try as hard as we want, but we cannot build an airplane that will CERTAINLY not crash. Building a safe airplane has to do with making it just light enough so it will still get off the ground and just complex enough so it will not fail during the time it should be in the air and pilots can still control it. That is related to probability but has no certainty. If an airplane crashes, the TRUE cause can never be identified, but certain causal patterns can be used to reduce the crash probability to some extent.

As a next step we need to look at causality in complex adaptive systems.

Filed under: Complex Adaptive Systems, Proposals, Thought Experiments

Questions and Answers

I do not see myself as a scientist; I am rather a philosopher. Some claim that it is the role of philosophy to doubt and question, but I see it as the role that asks the questions that scientists should answer. Philosophers can also suggest answers and request that scientists prove or disprove them.

The best scientists are the ones who are humble and are philosophers. The worst scientists are the ones who believe they are something special because they gained some deeper understanding of some specific aspect of some theory, be it general relativity, quantum mechanics, quantum field theory, string theory, loop quantum gravity or the Bible.

Let me point out that mathematics does not prove anything, and therefore nothing will ever prove any kind of universal theory; at best we may gain a gut feeling of ‘THAT COULD WORK’. Mathematics is a language that conveys a limited understanding of some limited aspect of some limited area of research. No more. Many mathematical methods are questionable tricks, such as renormalization. Many models require intermediate mechanisms that are completely unrealistic but seem to produce a plausible result of sorts.

That is nothing new. Long before modern science used mathematics, philosophers were using imagination, as did Einstein in his ‘Gedankenexperiment’, to find answers to troubling questions. One of the key questions, already discussed in ancient India, was whether the universe is built upon a continuum or a quantized grid. That core issue is as of now unresolved.

In classical (Newtonian) physics we look at values only on a larger statistical scale in Euclidean space and not at units of ONE. Looking at all aspects of ONE is not possible; hence Heisenberg’s Uncertainty Principle. What we can measure IS ACTUALLY all we can know, and we need to use one interaction with an entity to measure another. That is however not a measurement problem; it is reality. Observation is interaction. If something does not interact with something else, then by all principles it does not exist. This is a proposal that Einstein was not willing to accept.
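The uncertainty principle mentioned above is not a statement about clumsy instruments but a mathematical property of waves: a sharply localized wave packet necessarily contains a broad spread of wavelengths. A small numerical sketch (in units where ħ = 1, with arbitrarily chosen grid parameters) checks that a Gaussian wave packet sits at the Heisenberg minimum Δx·Δk = 1/2:

```python
import numpy as np

def uncertainty_product(sigma=1.0, n=4096, span=40.0):
    """Numerically verify Delta-x * Delta-k = 1/2 for a Gaussian
    wave packet (hbar = 1)."""
    x = np.linspace(-span / 2, span / 2, n, endpoint=False)
    dx = x[1] - x[0]
    # Gaussian wavefunction; |psi|^2 has standard deviation sigma
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)            # normalize
    spread_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    # Momentum-space amplitude via FFT, with k = 0 shifted to center
    phi = np.fft.fftshift(np.fft.fft(psi))
    k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    dk = k[1] - k[0]
    prob_k = np.abs(phi)**2
    prob_k /= np.sum(prob_k) * dk                          # normalize
    spread_k = np.sqrt(np.sum(k**2 * prob_k) * dk)
    return spread_x * spread_k

print(uncertainty_product())   # approaches 0.5, the Heisenberg minimum
```

Squeezing sigma makes the packet narrower in position but exactly correspondingly wider in momentum; the product never drops below 1/2. That is the ‘looking at all aspects of ONE is not possible’ statement in concrete numbers.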

The conclusion, in terms of philosophy, is simple. Everything has to interact with something else at all times to exist. Light quanta (photons) that travel lonely and non-interacting through space would be lost to this universe once they stop interacting. Quantum mechanics faces the problem of interaction in that the paradoxes only appear when you want to use an outside observer that is not part of the quantum system. It also puts the issue of causality on a new footing, because the observer is a cause! I propose that interaction is not cause and effect with time passing, but that energy can be observed as time and/or distance.

That opens up questions of what the vacuum of empty space is made of. What topology is it built upon? What is time? Where does the energy of this universe come from? What is causality? None of these questions can be answered without considering cosmology. An important consideration today is the concept of emergence, with which we leave the path of reductionism and its search for ever smaller causal building blocks. Phase transitions with symmetry-breaking changes of topology seem to play a major role in forming the layers of the universal structure.

What, if anything, could both carry the timeless, non-local force of gravity and propagate the timeful, local electromagnetic waves? A plausible proposal is that each quantum oscillator carries its own little universe of time and space around it. Topological phase transitions would then have to manifest the entities and forces of the Standard Model or quantum field theory by emergence.

As you can see we are nowhere nearer the answers, we are just finding more questions!

Filed under: Questions


I am the kind of person who does not simply accept ‘that’s the way it is.’ I am always searching for an understanding that resonates with my own perception. I am not a ‘proper’ physicist or mathematician, meaning that I have not bothered to get a university degree and a doctorate. But while they had their doctorates, neither Einstein nor Niels Bohr were accepted scientists at the time of their greatest discoveries, and both were as often right as they were wrong.

I love to search for, contemplate and discuss the seemingly ridiculous. So feel free to flame or ridicule me. I know that only dumb people are certain about something. The interesting aspect is that we cannot prove this universe from the inside. You cannot describe a system perfectly with the means of the system itself. This is clearly expressed in Heisenberg’s Uncertainty and Gödel’s Incompleteness. All our knowledge will always be both uncertain and incomplete! That is actually kind of comforting to know. It retains the mystery!

If you are a believer in some kind of supernatural, godly power you obviously chose the easy way out. Good for you if it makes you happy.

If you want to search and contemplate with me and you can be happy with uncertainty and incompleteness then WELCOME!

Filed under: General



Welcome to my Quantum Philosophy blog. This is a place to present and discuss my ideas and questions, perform thought experiments and share. I humbly accept that we are unable to discover the true nature of our universe from the inside. Please add your questions, comments or proposals. Thanks for visiting! Max J. Pucher


All copyrights by Max J. Pucher. You are only authorized to replicate posts if they are unchanged and left intact with all hyperlinks.