1. Introduction

Many well-understood paradoxes double as no-go proofs for a mathematical monist interpretation of reality, conclusively eliminating all perturbative, local, realist, background-dependent theories. Following apparent contradictions like the Doomsday paradox, Bell's theorem, and Zeno's paradox, along with circumstantial evidence from Boltzmann brains and Feynman diagrams, only derivatives of theories like causal dynamical triangulation remain as candidates. These are reinterpreted as renormalized approximations of a timeless conglomeration of all possible intersections of mathematical and amathematical statements, reinforced and excluded by their cardinality, bootstrapping our universe and its stable physical laws from nothing.

Already, thought experiments have eliminated many interpretations of quantum mechanics as nonsensical, and historical paradoxes like the ultraviolet catastrophe show how a simple, isolated contradiction can demand the reform of an entire view of reality. Other paradoxes are introduced below along with attempts to resolve them.

While many philosophical arguments successfully unravel the existence of time, space, and locality, this paper will attempt to do so using computational analogies and, more uniquely, to reconstitute our particular observable reality, complete with its probability of existence, from nothing other than the universe's true "substance," which is argued to be probability (or perhaps our ability to consider the axiom of choice). By positing a hairier universe than even Tegmark's Level IV mathematical universe, we gain additional tools to tame the measure problem and to conclusively disprove both a physical reality and the simulation hypothesis.

Most theories seek to determine the nature of our particular universe, implying it is one of many possible universes, bootstrapped by specific laws acting on real, extant structures. Instead, we posit, using mathematical monism, that almost no other hypothetical universe could contain our observer moments.

2. Table of Contents

3. Abstract

Observer moments must necessarily exist in brain states, not in the transitions between states. It follows that space, time, and matter not only do not exist but cannot exist. The perceived universe is reconstructed from a superset of amathematical universes lacking these features, using a proposed solution to the measure problem involving the continuum hypothesis and a compounding of observer moments from multiple now-tiled realities, while also excluding Boltzmann brains. Also gained are solutions to both the Doomsday paradox and Zeno's paradox, which are intractable in a physical universe with time. This explanation also suggests the two-state vector formalism interpretation of quantum mechanics, with an intuitive explanation of its validity, and a graph-node model of space similar to causal dynamical triangulation but without the requirement of time.

Conversely, this can be interpreted as a proof that if time and space are to exist, consciousness must not exist in the states themselves.

4. Expert summary

Causal dynamical triangulation is a renormalized approximation of an underlying mathematical monist theory. In a stochastic model in which all nodes generate all other nodes, and in which nodes and edges that occur a lower cardinality of times are discarded, the constraints of both a time-origin plane and required edge coincidence can be dropped, fully generating observed spacetime without realism, locality, or time. TSVF resolves the horizon problem by having observer moments serve as the root, or origin, rather than the Big Bang.

CDT does the heavy lifting by providing a non-perturbative, background-independent, eternalistic model that fulfills all the philosophical requirements mathematical monism imposes on a full representation of reality. Rather than a theory itself, CDT is the visible remainder after renormalizing all possible structures and discarding those with a lower cardinality, as those observer moments "jump" to higher-cardinality structures.

5. Hard Problem of Consciousness

The hard problem of consciousness is assumed to fall within the narrower constraint created by current scientific consensus. At most, this allows the "antenna" model. There appears to be no first-cause initiative or decision-making ability of a nonphysical homunculus that can influence its physical substrate, nor any insulation from physical manipulation (traumatic brain injury, transcranial magnetic stimulation). All low-level data-processing abilities are duplicable by generalist models (e.g., Gato), and high-level decision theory has obvious evolutionary influences.

Qualia seem entirely associative. "Red" existing independent of its associations is less compelling when replaced with a word like "lofty," which arguably has the same kind of qualia, but whose qualia are nothing more than its position in a network of associated words.

The hardest problem in this model is the nature of the entity that can assemble successive moments of our static block universe into an experience of time. This will mostly be sidestepped by assuming the existence of such an entity, whether physical or nonphysical, and confining it to observed laws of probability.

6. Claims

6.1. Observer moments can exist in a Turing Machine

6.1.1. Animal brain duplicated in stateful computer

Researchers have created a complete computer model of the brain of the roundworm C. elegans. With only 302 neurons and fewer than 10,000 synapses, its brain is one of the simplest of any animal. The OpenWorm project simulates the entirety of this organism using a Turing-complete programming language, Java. Inside the computer, any given state of the worm exists as a series of 0s and 1s. As the computer's clock cycle advances, other 0s and 1s, which describe instructions to perform, alter the current state to create a new state, also represented by 0s and 1s. These advancements of state represent time, with each one being a moment of the worm's life.
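To make the state/transition picture concrete, here is a minimal sketch, not OpenWorm's actual code: the organism at any instant is nothing but a bit string, and "time" is repeated application of a pure update function. The rotation rule below is a hypothetical stand-in for the real physics-approximating instructions.

    # Minimal sketch: a "moment" is a static bit string; "time" is repeated
    # application of a pure update function. Rotating the bits left by one
    # is a hypothetical stand-in for the real physics-approximating rules.
    def step(state: str) -> str:
        return state[1:] + state[0]

    state = "0101101"              # one static moment of the worm's existence
    history = [state]
    for _ in range(5):             # five clock ticks = five moments of life
        state = step(state)
        history.append(state)
    print(history)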

6.1.2. Neurons can apparently be duplicated

However neurons and synapses work, their function can apparently be duplicated by the manipulation of static, individual states. The connectome represented in the computer approximation of the worm behaves comparably to the physical animal. Further, convolutional neural network models show aptitude comparable to humans at recognizing images and sounds and at playing games.

It is assumed that this scales up. A computerized representation of a rat would behave as a rat, despite the much greater connectome complexity. The increase in complexity is quantitative: there aren't new or different kinds of neurons, just more of them with more connections. There is no reason to believe this wouldn't scale all the way up to a human brain, including all of our information-processing artifacts, such as conscious experience.

To define the terms used throughout: a state is the complete set of bits describing the simulation at one instant, and a state transition is a single application of the program's update rules, producing the next such set of bits.

6.1.3. Simulations cannot perceive pauses in simulation

If the computer worm simulation is paused and then resumed days later, there is no malfunction or glitch. The worm continues from the previous state as if no pause had occurred. If the worm had the capacity for conscious self-reflection, it would be unaware of this pause, even if it occurred mid-thought. If it did experience something unusual, the simulation would not run identically: any change in the worm's perception, or "thought," would register as a different state somewhere in the simulation. For the worm to experience a pause-and-copy differently from a continuous run-through, the simulations would have to differ; otherwise, by definition, they are having exactly the same experience.

Computers do this (context switching) thousands of times a second to all simultaneously running programs. First, a program's state is saved. Then it is removed from memory. Another is pulled from memory and resumed, only to have the same thing done to it a millisecond later. Programs don't need to be written in a special way to support this, nor are they "aware" of it, for the most part.
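A sketch of the claim, reusing the toy step function from the earlier sketch: freezing a program's state to bytes mid-run and resuming later leaves no trace in any state.

    import pickle

    def step(state: str) -> str:
        return state[1:] + state[0]        # same toy update rule as above

    def run(state: str, ticks: int) -> str:
        for _ in range(ticks):
            state = step(state)
        return state

    continuous = run("0101101", 100)            # uninterrupted run

    frozen = pickle.dumps(run("0101101", 50))   # "context switch": save state
    _ = sum(range(10_000))                      # another program gets the CPU
    resumed = run(pickle.loads(frozen), 50)     # restore and continue

    assert continuous == resumed    # no state anywhere records the pause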

6.1.4. Simulations cannot perceive rate of simulation

We can even run the simulation at different speeds with no effect on internal state ("thoughts") or behavior. The worm isn't panicky if simulated at 100x speed or sluggish at 1/100th speed. It goes through identical states, as its environment is scaled faster or slower at the same rate. If the worm were self-aware, it would not perceive any difference in time regardless of its simulation speed. To argue that a simulated organism could perceive its simulation speed, one must argue for a non-deterministic element in the simulation: to perceive varying speed, some states would have to be altered when run faster or slower, representing varying perceptions or "thoughts" about the speed. That all states and the outcome are identical regardless of the speed or pause time is proof that the worm is unable to perceive, in any way, its simulation rate.
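The rate-invariance claim in miniature: pacing the same deterministic update with different wall-clock delays yields bit-identical state sequences, so no internal "thought" can encode the rate. The update rule is the same hypothetical stand-in as before.

    import time

    def step(state: str) -> str:
        return state[1:] + state[0]        # toy update rule as before

    def run(state: str, ticks: int, delay: float) -> list:
        states = [state]
        for _ in range(ticks):
            time.sleep(delay)              # external pacing only
            state = step(state)
            states.append(state)
        return states

    fast = run("0101101", 20, delay=0.0)    # "100x speed"
    slow = run("0101101", 20, delay=0.01)   # "1/100th speed"
    assert fast == slow    # every internal state is bit-identical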

6.1.5. Simulations cannot perceive copying or moving

One could combine both pausing and copying the simulation to no effect. Run one cycle, pause, copy the new state to a new computer, run another cycle, pause, and so on. Following the conclusions above, this worm still experiences a complete, normal life in normal time identical to the state changes it goes through when run in real-time on one computer. All states and state changes remain intact.
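The pause-copy-move combination as a sketch, again with the toy rule: each cycle, the state is serialized, "shipped" to a new machine, and resumed; the trajectory matches a straight run on one computer.

    import pickle

    def step(state: str) -> str:
        return state[1:] + state[0]        # toy update rule as before

    reference = state = "0101101"
    for _ in range(10):
        reference = step(reference)        # one machine, run straight through

    for _ in range(10):
        blob = pickle.dumps(state)         # pause: freeze the state to bytes
        state = pickle.loads(blob)         # "new computer": restore the bytes
        state = step(state)                # run a single cycle there

    assert state == reference   # ten pauses and copies, the same life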

6.1.6. Simulations cannot perceive stepping

Further, we could even pre-calculate all states, say 10 seconds' worth, and execute the thousand states (at 100 Hz) over the course of only one second. Start 10 separate simulations at once, each beginning from the nth second of the original simulation. Even if you believe the magic happens in the state transitions, most transitions are occurring here, and the being is able to live a normal 10 seconds of life in a single second, unable to distinguish that it is doing so in 10 distant universes simultaneously. Even in our universe, this could be subdivided to the point that the entire universe could execute in Planck time and we'd experience the full 13 billion years, if spread across enough universes.

There are two ways in which the actual experiencing could take place: in the transitions between states, or in the states themselves. I argue that it is in the states themselves and that the transitions are immaterial.

This allows us to experience billions of years in a universe that exists as only a flash. A counterargument is that the experience is actually occurring in the calculation phase. If that is the case, what happens when the simulation is executed a second time, after the calculation is complete? Inspecting the states, we see a complete lifetime, with memories from the first second persisting in the 10th second, despite their perhaps being remembered before they actually occur. What if we run the calculation in reverse, computing previous states from the endpoint? Since this universe appears time-symmetric, we know this should be possible. From the organism's perspective, does time still run forward? Its memories at each state still reflect a forward progression through time. Assigning conscious experience to the state transitions or the calculation phase creates many intractable paradoxes.
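Since the toy rule used in these sketches is invertible, the "run the calculation in reverse" question can be sketched directly: reconstructing the lifetime backward from its endpoint yields exactly the same states.

    def step(state: str) -> str:
        return state[1:] + state[0]        # toy rule, invertible

    def unstep(state: str) -> str:
        return state[-1] + state[:-1]      # exact inverse: rotate right

    forward = ["0101101"]
    for _ in range(10):
        forward.append(step(forward[-1]))

    backward = [forward[-1]]               # start from the endpoint instead
    for _ in range(10):
        backward.append(unstep(backward[-1]))

    # Same lifetime, computed in the opposite order; each state's internal
    # "memories" still point the same direction.
    assert backward[::-1] == forward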

A note on what gets the worm from one moment to the next: the computation that allows the worm to change state over time is, in effect, the laws of physics simplified. Instead of electron forces, inertia, and gravity acting on the worm's neurons, programmers approximate the effects of these forces over time and simplify. If one neuron fires, instead of calculating the positions of atoms as they travel along synapses, assume the next neuron fires, and so on. The instructions operating on the virtual worm are analogues of the physical laws of our universe.

6.2. Observer moments not in state transitions

6.2.1. Three possible seats of consciousness: states, transitions, or something that weaves them together

The experience of consciousness and the passage of time must arise from one of the following three:

  1. Individual states
  2. Transition between states
  3. “Antenna” model

The traditional model, which allows for the existence of time, is that as one state transitions to the next, an observer moment occurs. This avoids many counterintuitive implications, like all moments occurring simultaneously and timelessly, as well as providing an anchor for the triangular outbranching in CDT (I assume).

An argument against this, which works against both states and transitions, is that the type of transition is immaterial to the product states. If we compute the state of a brain one minute from now, a state identical to the one it ends up in anyway, the variability in methods of reaching it is immense. We could compute it with bits, or use paper, or people standing with their hands in different positions to represent neurons. Even the step resolution is variable, yet the same states occur in between and at the end. If the end thought contains "I feel like 60 seconds have passed" regardless of the simulation rate, that is evidence against the rate being material to the problem.
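A sketch of this method-invariance: two deliberately different mechanisms implementing the same toy transition rule (string slicing vs. integer bit-twiddling) produce identical product states, so the end state cannot record which method computed it.

    def step_strings(state: str) -> str:
        return state[1:] + state[0]            # "method" 1: string slicing

    def step_integers(state: str) -> str:
        n = len(state)                         # "method" 2: bit-twiddling
        x = int(state, 2)
        x = ((x << 1) | (x >> (n - 1))) & ((1 << n) - 1)
        return format(x, f"0{n}b")

    a = b = "0101101"
    for _ in range(60):                        # "60 seconds" of transitions
        a = step_strings(a)
        b = step_integers(b)
    assert a == b    # radically different mechanisms, identical states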

6.2.2. Time model: consciousness occurs at boundary between past and future

Instead of letting the computer calculate the state changes using its formula, we can do it on paper: write down any previous state, apply the instructions, and get a new state. We can do this from any state to get the next state. Let's do it ourselves instead of letting the computer: read the state off one computer, calculate the next, and type it into another computer. The worm still experiences a full life, normal time, and no spatial jumps, despite its existence being only a grid of static 1s and 0s sitting on many different computers. [This isn't a great example]

What if I have a great memory and flip the bits by hand to their next state, without calculating? Does the worm still have the experience of the passage of time? Did its experiences occur inside my brain, but remain inaccessible to me because I only see them as 1s and 0s?

If self-aware, is the worm only experiencing the passage of time as I crunch its next state with paper and pencil? We know all its states are invariant regardless of which method we choose to calculate them. That is a strong argument that its consciousness is contained in the static states. If consciousness existed in the state transitions, which are highly variant, we would expect the worm to have different experiences depending on how the state transitions are performed. Since the states contain its "thoughts" and behaviors and do not change, it is fair to conclude the states are its true thoughts.

What happens if we do the math wrong and go back and correct it? Does it experience an anomaly and then “unexperience” it when we erase it as if it never happened, like we’re wiping its memories?

This model is discarded because of the logical inconsistencies it creates.

Counterargument: this is sleight of hand, and thoughts are actually the state transitions. One can change how the state is represented and make the same argument in reverse.

6.2.3. In the traditional model, consciousness exists where the computation between states occurs

Consciousness is traditionally considered to exist in the transition from state to state. As the machine computes a new state, something about the transition between one state and another, and not the states themselves, gives rise to the sensation of consciousness. If you run a simulation of a worm, or a human, the animal experiences the passage of each moment as the computation of one state turns it into another.

There are the state transitions and the state transitioner. The transitioner is nothing more than a moving head and a couple of rules, like a Turing machine; its simplicity makes it a bad candidate for the seat of consciousness. Then there are the state transitions themselves. I don't yet understand what they really are, but I need to investigate them if I want to argue against them.

6.3. Time Can’t Exist

6.3.1. Storing all states creates a timeless, continuously looped experience

The only alternative is that the transitions are immaterial and consciousness somehow exists in the states themselves. If all the worm's states are stored somewhere, then the worm experiences its entire life not just over and over again, but always. It is always experiencing every moment of its existence. This implies that the static 4-D block of all spacetime exists and our experiences are an immaterial wave flowing through it. The wave doesn't occur over time. The wave is fictional time, generated by the existence of states that, if transitioned through, would generate a time-like experience.

The generative function that creates future from past could just as easily reorient time into a spatial dimension. It would be a much more complex generator, but if it could generate this block of spacetime from left to right rather than past to present, we would experience time “flowing” in that direction and what we previously called time would be a spatial dimension.

Do the states even need to be written down in order? It's not the physical proximity of bits that determines their next state or how they're operated on. If we simply write down all possible combinations of 0s and 1s sequentially, will a time-like experience self-assemble, hopping between states as needed?
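The question can be sketched with the toy lifetime from earlier: every one of its moments already appears in the bare enumeration of all 7-bit strings; only the (unwritten) rule linking them distinguishes a life from a list.

    from itertools import product

    def step(state: str) -> str:
        return state[1:] + state[0]            # toy rule as before

    trajectory = ["0101101"]                   # the worm's life...
    for _ in range(6):
        trajectory.append(step(trajectory[-1]))

    # ...and the bare enumeration of every 7-bit string.
    everything = {"".join(bits) for bits in product("01", repeat=7)}

    assert all(moment in everything for moment in trajectory)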

6.3.2. A timeless, static, continuous experience is indistinguishable from a single lifespan inside of time

It will still experience time normally. There will be no mix-ups or confusion. Each state represents a particular time, and the being from a day ago reliving that day, or from tomorrow experiencing tomorrow, will not interfere with the you experiencing now. Each state carries only the memories of its own past and its own now. It has no knowledge of the others, or even of itself occurring continuously. (This probably needs a lot more explanation. I find it intuitive, but it seems like a big leap.)

6.3.3. Zeno’s Arrow paradox resolved

Since only states give rise to the universe and observer moments, and the transition between them is immaterial, motion is not necessary. Of the proposed solutions to Zeno's paradox, only that of infinitely many motions across infinitely small distances makes sense, but it doesn't work in a universe with Planck time and Planck distance. At any moment, anything not in motion is simply not in motion. Nothing not in motion has any ability to transition to another state, and if motion did exist, which it can't, there would be no way for something to move. Atomization is no resolution to the paradox: do objects teleport one Planck length at a time, instantly, at infinite speed, stopping at each Planck length?

Claims that time is a continuous construct with no instants contradict what computers have shown us about consciousness. We may not know what it is, but we have no problem simulating brain-like information processing using stateful computers that represent one moment at a time as a static array of 0s and 1s. If brain-like processing were something that specifically required a universe with real motion, we would be hitting some very odd roadblocks when programming neural networks that recognize sound and images on stateful machines.

Problem: moving to time as a series of static states turns momentum into a hidden variable, mentioned below with citation.

6.4. Space Can’t Exist

6.4.1. Space is also unnecessary

If all that's necessary to experience a complete lifetime is states laid out sequentially, for example on a computer hard drive, why does the sequence matter? Can we rearrange the states in a different order and prevent the being from experiencing its lifetime? If so, how? If the experience of existing lives in the states, how does it track through space? There is no hard-drive needle moving from one state to the next. No activation is required, no computation between them. It is only the existence of the states that creates the timeless, continuous experience. How would their spatial position affect that? Can I move one bit of one state to another galaxy and interrupt its existence? We know from the earlier examples, like pause, move, and resume, that experience is not changed by position. Why would the proximity of one bit to another be relevant to the experience?

More importantly, what is tying nearby bits together other than the rules acting on them using nearby bits? If the bits are redistributed but can still represent conscious states, we are left with a breakdown of the entire concept of locality. This is the intractable position reached by using states rather than their transitions, and the unraveling of realism.

Consciousness seems to be a fundamentally nonlocal phenomenon. With no senses, the experience of existing is not necessarily localized to any one place. To use the China-brain example, if billions of people in China exchange cards in the pattern of neurons firing, where is the consciousness? Hovering over the country? It's not a sensible question. Only when paired with sensory devices like eyes and ears can consciousness localize itself. [recent paper that the United States could be considered a conscious entity]

This brings up some strange problems with locality. If the proximity of the bits of the states is irrelevant, why do we experience moments inside a brain with highly localized states? Our consciousness did not land in a rock containing a pattern of molecules with spin orientations corresponding to the 1s and 0s of this state. It's inside a wet, but largely traditional, computer, computing away. Locality appears to play a large role in our existence and is highly structured. (This is a different argument and doesn't really follow.)

It is possible the pattern of our conscious experience does exist in rocks' electron spins, star configurations, the handing of papers around China, and solar storm vortices, but ascends to larger, more complex structures, as those have histories which generate its past and future, until it "lands" in a meaty organ far enough removed from quantum effects to carry those firing patterns forward in time without disruption from the randomness of the levels below.

For every conscious experience, can a universe which hosts that experience be generated? Conversely, if a moment cannot generate a universe, is it not experienced? Is a fundamentally nonlocal Boltzmann brain experience unable to generate a history and future which generate itself? Our experience is complex enough to have required generating 4 billion years of reverse evolution in a 13-billion-year-old universe in order to have these thoughts.

6.4.2. Mathematical Platonism

What if, instead of writing down all the states, we simply write 1 and 0? What is to prevent the states from using those bits, duplicating them as needed? The state 11001011 can get its 1s and 0s from there and still exist in the same way as if it were written down. 1 and 0 are simply ideas; if we don't write them down, they still exist. If we accept this, space and locality are also unnecessary. The states are a timeless, endless soup of math permeating a void. In the way that 2+2=4 always exists, regardless of whether there is a universe for it to exist in, 1s and 0s always exist, and so all combinations of states and their intersections always exist.

The reason we invented math the way it is, with equalities like 2+2=4, comes down to three properties: conservation, substitution, and nesting. All three are useful when building up space from a soup of nothing, infinitely deep.

6.5. Amathematical Universe: locality doesn’t exist

Like Tegmark's argument for a mathematical universe, I believe that all mathematical structures exist and are all that exist. Further, I believe all nonmathematical structures exist. If 2+2=4 is represented by the string (SS0 + SS0) = SSSS0 in TNT, then the false string (S0 + S0) = SSSSSSSSSSSSSSSS0 also exists. Further, I believe that inconsistent mathematical structures can embed themselves in mathematical structures in this timeless void, creating a horrendous mishmash of every possible true and false statement. That is, a mathematical universe generated from a simple equation repeated on itself also has false statements "injected" at every possible opportunity. Not only that, it has all possible false statements injected at every opportunity.

This makes the amathematical universe much harder to defend, but it also avoids assumptions of beauty or simplicity that have no arguable basis other than in reverse (look how simple our universe is; it can't include amathematical structures). In exchange, I will put forth ways in which existing theorems and observable properties reduce these combinations of amathematical structures to the more manageable, straightforward universe we observe.

6.5.1. Math still exists in a void

(duplicate of above) Instead of a universe with time, space, and matter, we are left with a void: nothing, anywhere. Even in a void, mathematical truths exist. Imagine an endless soup somewhat resembling Hofstadter's TNT. If 2+2=4 is always true, regardless of our existence or the existence of this universe, then why isn't (2+2)+(2+2)=8? Truths that join together are still just truths. A truth assembled from smaller truths has a corresponding complex unassembled truth. In a way, this combining of truths "always" and "continuously" "everywhere" "creates" all possible and impossible universes, at all times. Everything exists and nothing exists.

While all possible structures of all types, even 2+2=5, make up our reality, only those which can generate each other end up part of our universe, due to observer moments ascending to the realities of which there is a greater cardinality. A node made up of 2+2=5 may nest, intersect, or join with another imaginary node like 3/2=7 and other nonsensical nodes, but 2+2=4 joins with 4+4=8 and nests an infinite number of times, never "corrupting" others. This gets impossibly complex very quickly, but we do have some hints that limit the types of structures that can generate each other and our eternalist block of spacetime.

Only the reals, complex numbers, quaternions, and octonions can be added, subtracted, multiplied, and divided (Hurwitz's 1898 theorem on normed division algebras). At each step we lose a feature: ordering disappears after the reals, the complex numbers not having that property; commutativity disappears with the quaternions; and the octonions lose associativity. This prevents the further, more complex number systems from participating in the generation of our universe, by way of being unable to "reinforce" each other.
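The commutativity loss at the quaternion step can be checked directly with the standard Hamilton product: i·j = k but j·i = -k.

    def qmul(a, b):
        # Hamilton product of quaternions given as (w, x, y, z) tuples.
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    i = (0, 1, 0, 0)
    j = (0, 0, 1, 0)
    print(qmul(i, j))   # (0, 0, 0, 1)  = k
    print(qmul(j, i))   # (0, 0, 0, -1) = -k: order matters past the complexes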

All multiplicative chains of elements of R⊗C⊗H⊗O can be generated by 10 matrices called generators. Nine of the generators act like spatial dimensions, and the 10th, which has the opposite sign, behaves like time. Sedenions can still be added, multiplied, subtracted, and divided; it's just that multiplication and division lose most of their useful properties (with the octonions, associativity and commutativity are already gone).

The main superficial distinction between a physical and a mathematical universe is substance. A physical universe has substance, while a mathematical one is a potential, or an idea. The differentiation of a substantive and a mathematical universe can be broken down into two components: conservation and self-interaction. Conservation is quickly being discarded as a hallmark of even a physical universe: whether the universe is infinite in size or there are infinitely many physical universes, conservation is off the table, and what would it add if we did demand it? Self-interaction is the more important part. A physical universe has objects bumping into one another and bouncing off, and we don't imagine mathematics to have this property. However, in the simulated worm, mathematics perfectly duplicates it bumping into walls. A subatomic particle event could be represented by an equation like 1+3=4, then 4=2+2; when combining, the truths in essence interact with one another in the same way as "physical" particles. Also, since we can simulate portions of a universe on a computer with great success, we've shown that whatever physicality we consider critical for a "real" universe is directly substitutable with electronic representations having none of the locality, material, or movement patterns of the physical objects they stand in for.

6.5.2. Not just true math, but false, inconsistent statements also exist in this soup

Most formulations of a mathematical universe do not consider that math "external" to that universe can interact with the math inside it and cause problems. I do not exclude this possibility, though it makes a mathematical universe harder to defend. I also do not exclude the possibility that inconsistent, false mathematical statements can interact with true ones, causing most universes to be inconsistent and not to follow physical laws. Instead, I will resolve these problems using a combination of the continuum hypothesis and a superposition of past and future, which also explains the arrow of time.

6.5.3. A mathematical universe excludes a physical universe

Accepting that mathematical tautologies exist in a void necessarily invalidates the existence of a physical universe. When considering how a physical universe differs from a mathematical one, I imagine two things: one, a physical universe has components that can interact with each other; two, a physical universe's components are conserved in some way. For the first, tautologies in a void should have no problem coalescing and interacting with one another, as those are just more complex tautologies that would come into existence on their own anyway, all possible combinations being always extant. For the second, by making the only difference between a mathematical and a physical universe the conservation of items, observer moments can no longer occur in the physical universe, via the continuum hypothesis.

6.5.4. Mathematical universes lift observer moments away from any extant physical universes

For each physical universe, there are infinitely more mathematical universes that are almost identical to that physical universe but jumbled up in infinitely many ways (Cantor's diagonal argument) while still containing the same observer-moment lifetime-states as the physical universe. Since both that physical universe and the infinite recombinations of mathematical universes contain the same consciousness lifetimes, the continuum hypothesis forces all consciousness in the physical universe to "land" in the analogous, infinite mathematical universes. Just as we showed above that you can pause, copy, and move a consciousness undetectably, these mathematical universes do the same to the observer moments in the physical universe. An infinite number of copies is made and "lifted" to the infinite mathematical universes, forcing all those observer moments to land in a mathematical universe even if a physical universe with one such moment "really" exists. By the continuum hypothesis, the physical universes, if they did or could exist somehow, could hold no fraction of observers. Without it, whatever number exists between the cardinalities would be the fraction of times the mathematical universes "stole" observer moments from physical ones. Its independence from ZFC, and its universality, may be a necessity.

This may rule out both simulated and physical realities by additional methods. Simulations require a limited number of states per time period. A simulation which produces 1 second of observer experience through a million cycles still produces infinitely fewer slices than the continuous mathematical structures which exist at all possible (uncountably infinite) states in between those calculated.

Given this, it would be impossible for observer moments to land in a discretely stateful universe, such as one that would exist if Planck time were interpreted as the minimal unit of time.

6.6. Continuum Hypothesis

6.6.1. The continuum hypothesis prevents a fraction of observer moments from landing in physical or lower mathematical universes

The continuum hypothesis, which very importantly for our purposes is independent of axiomatic set theory (ZFC), states that there are no intermediate infinities between the cardinality of the naturals and that of the reals. If there are 1 or infinitely many physical universes, which by definition are conserved in some way (or, said another way, their quantity or the quantity of things inside them is limited), then the infinite combinations of mathematical universes will always "lift" these states into their purely mathematical realm. Since the continuum hypothesis prevents any counts between cardinalities, we can be guaranteed that all observer moments land in mathematical universes, with no "fraction" of them ending up between cardinalities or in physical universes.
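For reference, the standard statement being leaned on here (relating it to observer-moment "fractions" is this paper's own usage):

    \[
      \neg\,\exists\, S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
      \qquad\text{equivalently}\qquad 2^{\aleph_0} = \aleph_1
    \]
    % CH is independent of ZFC: consistent with it (Goedel, 1940) and
    % unprovable from it (Cohen, 1963).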

6.6.2. CH also prevents observer moments from landing in a simulated universe

Mathematical universes do the same thing to computer simulated universes, invalidating the simulation argument. Each observer moment in a simulated universe is “lifted” away to infinitely many rescrambled but almost identical mathematical universes.

Oddly, we could build a simulation which contains observer moments, and those observers would behave as if they were fully conscious, but for each observer moment generated inside the simulation, infinitely more analogous moments would exist as mathematical universes with some underlying soup or noise to increase their quantity. Those simulated beings would instead find themselves inhabiting a world like ours anyway. But then, if we simulated them with enough granularity to examine the subatomic realm, they should find it classical all the way down. This is a paradox introduced by this theory.

(Can the void create loop-like links so long as they are not exploitable? Can a particle in this universe contain a “link” to the entire universe and place it inside? Same question for a black hole.)

Without the continuum hypothesis, one could find a non-infinitesimal fraction of mathematical universes between cardinalities. If the naturals represented one set of universes and the reals another, a cardinality halfway between them would cause that fraction of observer moments to land between these two sets of universes rather than always landing in the higher-cardinality ones. As long as there are infinitely more of one universe class than of its lower-cardinality counterpart, observer moments will always land in the higher, with no proportion going lower.

We could arguably not have consistent experiences without CH. I'm not sure what it would mean, or feel like, for two-thirds of our moments to land in a higher-cardinality universe and the remaining third to land in a lower one. Why would that scramble us? Is it possible to envision CH being false, and that preventing any consistent observer moments where CH is false, or in the entire void if CH applies regardless of the set theory chosen?

6.7. Boltzmann brains

6.7.1. Most of us should be Boltzmann brains

We are left with a nonexistent universe that, due to the unending combinations of math, assembles moments of existence for every possible being. Since the moment you are experiencing is the result of some possible combination of TNT strings joining, it exists. This is the Boltzmann brain problem: absurd, nonsensical blinks of consciousness should dominate our free-range math void. With so many Boltzmann brains springing into existence continuously, your experience should be one of them. Boltzmann brains are traditionally a problem for an infinite universe, but they're just as big an issue, if not more so, for a mathematical one.

Of all the Boltzmann brains possible, there are many other options for us to land in that aren't a mysterious void: the fluid dynamics inside a star, the spins of atoms inside a rock, the passing of papers among people in China, the gravitational attraction of stars in a galaxy.

Not sure if this is the same as Donald Hoffman's idea, but for us to have a long sequence of coherent observer moments, we need a substrate to interact with that allows us to generate further moments. Landing in an evolutionarily generated animal whose goal is to keep existing and observing is a great opportunity.

6.7.2. Worse, we should jump among all Boltzmann brains

In models that have time, or where the "now" sensation is generated by the transition between states, Boltzmann brains are a one-off problem: either you're one or you're not, and if you escape being one through probability, you're set. In this model, they're a continuous problem that can lift away your consciousness at any and all moments. However, by embracing this possibility, we arrive at a more coherent explanation than by ignoring it and saying time is real and the progression within one mathematical structure accounts for our experience.

6.7.3. Instead, our moments have landed in a historied, consistent, conserved physical-like universe

Instead, you can trace a consistent history back to the beginning of the universe, and you seem to have a future. We would expect most Boltzmann brains to be flashes of disembodied self-awareness. Not only that, you should not perceive consistent laws of physics: objects should be popping in and out of existence all around you. There must be a mechanism streamlining the limitless potential mathematical universes that causes us to land in this mostly consistent one, resembling a physical, matter-conserved universe.

6.8. Arrow of Time

6.8.1. Past and future are generated from our current observer moment

Experience as we understand it is the passage of time. If time is reversed, physical laws still apply. This is mostly consistent with the "block of paper" model of the universe in 4-D. With processing between states being irrelevant, and experience instead arising from the existence of adjacent states, time is an artificial construct. In our model, what exists "first" is now; past and future come "later." Since time doesn't exist, what we really mean by "later" is that our observer moment is the root "tile" that determines the orientation of the other tiles. Regardless, the reversibility of artificial time is necessary to explain our experience as historied brains in a consistent universe.

There are alternatives to reversible time that are still built up from now rather than from the distant past. Physical laws could be identical going forward or backward in time from now, making the past a reflection of the future. Or totally different physical laws could apply as now reaches back toward the past versus toward the future. Instead, we have reversible laws which, most importantly, generate a consistent past and future regardless of which moment is our current observer moment.

6.8.2. Time’s apparent reversibility compounds observer moments to give us a consistent history

The reversibility of time is what allows our consciousness to land in a universe with a consistent history. Your current moment generates a past all the way back to the beginning of the universe and simultaneously generates a future all the way to its heat death. This moment is the root tile placed on a blank board. The shape of the tile, analogous to physical laws, allows past and future moments to be placed after it, forming a light cone in both directions. All future and past moments along this path "simultaneously" generate their light cones as well. They are not necessarily in the same universe or related to each other, as no physical reality exists.

Loopback compounds moments

There may be a loop-around effect that assists in compounding the node trees generated by octonions. Looping would be expected at the physical edges of a universe (many tilings have been disproven from CMB data), in the time dimension (a repeating big crunch), and, novelly, in the scale dimension. Deep enough, we should find our own universe embedded in elementary particles or their significantly smaller constituents, similar to the fecundity argument that black holes in our own universe generate others. Observer moments in a universe with looping should exist "more" times (depending on how moments are "counted" by consciousness) than in a universe with tapered or rough endpoints. Are there ways of scale embedding that allow this universe to be truly identical to its embedded/linked duplicates?

It is odd that the smallest and largest scales of this reality appear to differ by only about 200 doublings, with us roughly in the middle. 200 is an exceptionally low number; in fact, it is the 200th lowest number in existence. Further, we are about 100 doublings in each scale direction from being unable to influence events at any level of technological advancement. This is strong circumstantial evidence that the compounding effect that generates universes is as tight as possible on the scale axis while still avoiding causal interference between the largest and smallest levels.

6.8.3. Update: Spacetime block generation can be independent of time sweeping

It is normally thought that time is the operation of physical laws on the current moment to create the next moment: time generates new space from this space. In our model, we take the same concept and instead argue that this moment generates future and past moments. However, since time in our model is a nonexistent, continuous sweep through a spacetime block, it is not necessary that the generation of the block proceed from this moment. The block can just as easily be generated from the end of the universe backward as from the start of the universe forward, or from now in both directions, while time sweeps through it continuously.

Most likely the sweep "starts," or has a tiling context "beginning," with a single node of a Feynman diagram and propagates outward from there. This requires all diagrams to be linked together rather than forming separate sets of particles which do not interact with each other. This could be considered circumstantial evidence for the theory, though if there were classes of particles that did not interact with ours at all, we would only be aware of the ones that make up ourselves.

This is why Feynman diagrams can be rotated through time and remain equivalent. The node network of our spacetime block expands outward and is independent of the experience of time sweeping through the network. Further, the virtual particles of QED (which are far more revelatory of a mathematical multiverse than QM or its many-worlds interpretation) participate in a universe-level node relaxation that determines their masses. A purely mathematical network of Feynman diagrams making up our spacetime block can relax instantly and leave us with the most stable block (the best at self-generation), with mass values for its fundamental particles that are the final result of this relaxation.

This is why renormalization works: it attempts a localized approximation of the universe-wide node relaxation that generates mass values. It also implies that renormalization will not be bested (and has not been for 100 years) by another technique, nor will easily derived mass values be found, without a technique that acknowledges that the very edges of the universe (in both time and space) participated in creating the fundamental masses we see.

Renormalization/regularization is also seen as introducing an unsolvable mystery on which no progress has ever been made: why does an almost perfectly predictive theory also predict values escaping to infinity, which must be discarded for no known reason? In the mathematical monist model, however, this makes perfect sense. The rules governing interactions of "particles" (mathematical structures) in our universe do indeed interact in ways that create infinite amounts of energy. Such an interaction is invisible to us because it occurs in a universe identical to ours except that the runaway creation of infinite energy left that universe without a self-reinforcing node network, placing it not in our cardinality but far beneath it. Field theories predict all interactions that occur in other universes; renormalization is how we filter for those that participate in self-reinforcement in our own.

As the physical scattering angles of fundamental particle interactions are 4D-rotatable, angles in space correspond to the speed of causality in time. Extremely tiny angles in 4D space would correspond to faster-than-light causality. The speed of light may be the cause or the effect of the dead-cone effect, depending on perspective.

This was considered in order to deal with the issue of now being potentially far more complex than the start or end of the universe, owing to the complexity of the structures our bit-like consciousness is embedded in. It is hard to imagine how now can be the starting point when the brain is so much more complex than the consciousness it carries. That is a large mishmash of matter and space to generate.

Imagine your current moment of consciousness existing in a void. It always exists, as do all other combinations of all tautologies. For the sake of simplicity, imagine this moment represented by some binary string describing its state: 0101101. The infinite tautologies that are always extant can operate on that state, generating all other possible states. Almost all transform it into noise, but some do so in a pattern. These very few are analogous to physical laws like momentum or gravity. Of the infinite possibilities, very few strictly conserve 1s and 0s, and very few of those have an exact opposite transformation that undoes the previous one. These are the laws of our own universe, which generate past states and future states from 0101101.
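A sketch of the sorting described here, with rotation as the hypothetical conserving, invertible rule and an arbitrary arithmetic scramble standing in for the typical noise rule:

    def forward(state: str) -> str:
        return state[1:] + state[0]        # rare: conserves bits, invertible

    def backward(state: str) -> str:
        return state[-1] + state[:-1]      # its exact opposite

    def noise(state: str) -> str:
        # typical transformation: neither conserving nor invertible
        return format((int(state, 2) * 73 + 11) % 128, "07b")

    now = "0101101"
    future, past = [now], [now]
    for _ in range(3):
        future.append(forward(future[-1])) # the light cone, one direction
        past.append(backward(past[-1]))    # and the other

    print(past[::-1][:-1] + future)        # a consistent history through now
    print(noise(now))                      # noise rules lead nowhere coherent
    assert sorted(now) == sorted(future[-1]) == sorted(past[-1])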

With two of those opposing transformation rules operating on our moment (while all other possibilities operate on it "simultaneously"), they generate past moments and future moments, spanning "outward" like a light cone. All other rules produce their own similar cones, but most of those lead to nonsense moments where consciousness does not exist. Others produce valid past or future moments with anomalies: a region with no gravity, intense radiation, you with an extra arm or turned inside out. However, with reversible laws applied to every generated moment, every moment can generate a portion of the universe inside another's light cone. Now generates a nanosecond ago, which in turn generates now, which generates a future, which in turn generates now.

This compounding is the magic bullet that allows us to deal with amathematical structures in a math-like universe. Your current observer moment is generated by every particle, structure, or node of space that exists in both your forward and reverse light cones, from the big bang to the end of the universe. With your current moment consistently generated by infinitely many (which aleph?) other moments, and with so many more valid moments generated than invalid ones (are they? how do we know? can we count like that?), your current observer moment appears consistent and your existence stretches back to the beginning of time, just as in the traditional model.

6.8.4. Boltzmann brains by definition do not get their current moment compounded from past and future

It is the reversibility of time that creates identical observer moments from each now-tiled moment. Your current observer moment does not just exist in isolation; it is also generated by every future and past moment. Boltzmann brains do not have this luxury, which is why they are not a problem for this model, and why our observer moments instead land in a universe with directional, reversible time.

6.8.5. Horizon problem

With time tiling backward to generate the past from now, the horizon problem is nonexistent. An observer moment inside a stable, long-lived universe with complex features is what generates the dense, hot, uniform universe just after the big bang. Inflation is acceptable, but not at all necessary.

Alternate/complementary resolution: as all nodes generate all others, consistency extends beyond one node's light cone, as it generates other nodes which have their own light cones and so forth, "stabilizing" the entire universe, or conversely, allowing destabilization in one cone to scrap an entire universe rather than leave it locally significantly hotter or colder.

6.9. Mathematical universe vs. fluctuation multiverse

Both are similar in that they give rise to an unlimited number of possibilities, through different means. However, mathematical universes have the same probability of generating a single node or TNT string as an entire universe, while a multiverse landscape must deal with traditional probabilities of particles coalescing. In the multiverse model, a single Boltzmann brain has an astronomically higher probability of forming than an entire universe. In the mathematical model, the reinforcement of nodes backward and forward in time, and from other light cones, actually increases the probability of an observer moment landing in a historied, consistent "universe."

6.10. Two-State Vector Formalism

6.10.1. TSVF supports the generation of observer moments from both the past and future direction

The two-state vector formalism interpretation of quantum mechanics supports the compounding of multiple now-tiled spacetime blocks. Now, in TSVF, can be inferred from a combination of the future and past moments. Since there are infinitely many more non-now moments generated for us from the past and future than there are of this moment, TSVF makes perfect sense. Each moment is simply an interim state generated between the two surrounding moments, including the current one.

6.10.2. Continuous time requires hidden variables

Without TSVF generating now from past and future moments, we are left with continuous time: something that mutates the past into the present moment. David H. Wolpert, Artemy Kolchinsky, and Jeremy A. Owen have shown that continuous time is a hidden-variable theory.

“How does the classical world with its arrow of time emerge from the quantum world where the governing equation is time-symmetric? And the answer is: the classical world emerges by a process of decoherence, which is to say, by the creation of large (O(10^23)) networks of entanglements which (it can be shown mathematically) have behavior that is indistinguishable from classical systems. It is very similar to how thermodynamics and the time-irreversibility of the second law emerge from time-reversible Newtonian mechanics”

6.10.3. Quantum foam and randomness are an artifact of our lifting from a lower cardinality

We could have had a purely classical universe (apart from the black-body problem), but instead we are in a universe where any given subatomic event has a random component and particles are rapidly coming into and out of existence. We were lifted into this more complex, random universe because for each classical universe, there are an infinite number of otherwise identical quantum universes where a single particle is in an infinite number of places. Our observer moments are lifted to a very high-cardinality universe with the maximum amount of "noise" it can have without that noise disrupting our conscious experiences.

(We could also be in a quantized universe because Navier-Stokes is unbounded, and any true flowing of fields would create infinitely deep, fractal-like eddies which could bubble back up to the macro scale and consume the universe if several interacted, or even if one interacted with itself.) (replaced by below)

An infinite number of Feynman diagrams is needed to calculate any given interaction. One interpretation is that, in some real way, all interactions do occur to give the electron its anomalous magnetic moment. A proposed interpretation of quantum fields (which may or may not be compatible with current calculations) is that instead of each possible diagram contributing a proportion of the interaction based on how many nodes it has, the declining probability contributions are due to an infinite regress of nodes in which each diagram is further composed of other diagrams in a fractal manner, descending forever. Infinite combinations of infinite particle reactions, all resolving at the subatomic level to consistent values, place us in an astoundingly high-cardinality universe, propped up by a spectacular infinite soup.

If our brains operated at a smaller scale, one disrupted by quantum mechanics, we would find the quantum realm that much smaller: just enough so that it wouldn't disrupt us. This is the opposite of most mathematical universe formulations, which place us in the simplest universe containing a given observer moment. Due to the amathematical universes, the continuum hypothesis, and "lifting," I argue instead that we are in the most complex. Our universe is maximally complex, especially beneath the scale required for a biological neural network to function classically.

Exactly how much larger could the quantum realm be before it disrupted our neural networks? The universe spans roughly 40 orders of magnitude. If neurons are within 1 or 2 orders of magnitude of their behavior being destabilized by QM, that is circumstantial evidence for this theory.

Is it possible this universe-jumping is also responsible for the quantum-classical transition? Can experiments be devised to locate this transition in a way that reveals its relation to observer space? Why would the hopping between universes, and either the randomness of QM or its foam (distinct things), not conform to the stretching and deforming of space from gravity or high speeds?

Further, this random quantum foam we experience is our own observer moments hopping between universes, just as in the paused, copied, and moved worm simulations. It is not necessary for the consistency of our experience that every quantum event be consistent, but it is necessary for every macroscopic event to be.

Quantum mechanics is very consistent in some ways, such as how individual particles behave, but perfectly inconsistent in others, like radioactive decay being truly random.

We should be able to design very interesting experiments to tease out observer effects and how observer moments are compounded. If two researchers are working on an experiment, can one be designed such that when I hear about it, it fails, but when I conduct it myself, it succeeds? If there is an unexpected result, like both researchers finding the same thing when this theory predicts only one should, is that evidence of further observer-moment compounding resulting from multiple observers compounding each other's moments?

Why, then, doesn't launching subatomic particles into detectors just register nothing at all most of the time? If that were the case, it could be used as a perpetual motion machine somehow, as the ejection of particles produces a backward force yet they hit nothing.

We should find our universe at certain minima and maxima: just beneath the non-randomness needed for neurons to operate properly, we should find the universe maximally complex, with truly random events occurring as deep as we can inspect without their being able to bubble up and disrupt our brains. Also, as Donald Hoffman argues, the physical world is indeed generated to represent each conscious experience. I disagree, however, that the representation is incomplete, merely an interface consciousness can "land" in, and that we will not find answers in classical mechanics or 1s and 0s because the physical world is an approximation that doesn't fully capture consciousness. Consciousness can generate as complex a structure as it needs to represent itself, and it has generated a pretty fancy brain to live in, as well as a sense of time. I think the representation is complete. Plus, we've had great success generating our key components, like pattern recognition and game playing, with artificial structures that run on computers. I also disagree that the world does not exist beyond our perception, or other than as we perceive it: we generated a complete world by generating moments that generate other moments recursively. QM as a seat of consciousness is a misdirection; using it to say things don't exist until we see them is another.

6.10.4. Superdeterminism

As a solution to entanglement. Need to consider. Does it hold up if events are truly random?

6.10.5. Non-locality is expected

Just as now generates future and past, more often future and past generate now. This is how nonlocality is interpreted. An entangled pair created now propagates forward in time, instantly generating a future. If the other particle is measured, that particle then generates states backward in time which reach the present, which then propagate forward in time again to reach the first particle. This is how particles can "communicate" backward and forward through time with anything in their light cones.

Just as now generates a past and future in its light cone, those pasts and futures generate the space adjacent to now, which in turn generates its own light cones. This expands "instantly" to create an entire universe from a single localized observer moment.

There are a number of experiments, like the delayed-choice quantum eraser, which demonstrate results like this. The past is rewritten to maximize consistency with now, an expected effect of now being what "first" exists and generates the past.

Entanglement is an "exploit" of the rules that create our past and future. It appears counterintuitive because the universes generated by each moment overlap, and only those with maximum overlap contain moments, giving the same appearance of a consistent past and future that helped us escape the amathematical universes.

6.10.6. Locality/Proximity formation

Other universe models have proximity as a native feature. Most mathematical and physical models of space imply that things can be near or far from each other. This is not a feature of the observer space model and must be generated statistically from the intersection of all possible and impossible mathematical universes.

To build up locality from nothing, each "node" of space generates all possible and impossible nodes using all four normed division algebras (reals, complex numbers, quaternions, octonions). Only the quaternions and octonions are noncommutative, a requirement for directional time. Nodes generated from complex numbers, being commutative, would generate nodes identical to each other rather than opposites (representing a forward and a backward step through time).
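
A quick check of the noncommutativity claim, as a minimal Hamilton-product sketch (the tuple representation is just for illustration): quaternion units satisfy ij = k but ji = -k, supplying the ordered, direction-sensitive steps the construction needs, which commutative complex numbers cannot:

    def qmul(a, b):
        # Hamilton product of quaternions represented as (w, x, y, z).
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    i = (0, 1, 0, 0)
    j = (0, 0, 1, 0)
    print(qmul(i, j))  # (0, 0, 0, 1)  = k
    print(qmul(j, i))  # (0, 0, 0, -1) = -k: order matters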

With all possible nodes generating all other possible nodes with quaternions or octonions, an infinitely complex graph is created which still obeys some properties, like non-commutativity. In this graph, a given node will occur infinite times, as many other nodes generate it by applying division algebras to themselves. If identical copies of this infinite graph are superimposed upon each other at the point of our repeated node, I suspect some edges and nodes will appear a cardinality more often than others. Further, and I don't know if this step is implied by the previous, all other duplicate nodes are similarly overlapped from the same graph, and then those graphs are again overlapped with each other. Now, as long as some edges occur a cardinality more often than others, those in the lower cardinality can be pruned: the higher occurs infinitely more often, preventing their "existence," as universes have a 0 probability of using the lower-cardinality node or edge over the higher.

I suspect this “pruning” of lower cardinality edges and nodes changes the infinite graph from one in which all nodes (points in space) are equidistant to all other points (reachable from 1 edge) into a graph resembling our physical universe with intuitive locality, in which each node is connected to exponentially more nodes via an increasing number of edges (distance). Further, and not implied by the previous, the network is consistent among all nodes, in that each finds itself a fixed number of edges away from every other node regardless of observer.

An attempt to duplicate this on a small scale: imagine 4 nodes numbered 1 to 4. The starting graph is a square with an X inside. The operators are +1 and -1. Let's also include operators +2 and -2 for each node, to show that +/-1 should win out by creating a tighter graph; a minimal sketch follows.
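
Here is that sketch, assuming the 4 nodes sit on a cycle and the operators act modulo 4. One caveat: with only 4 nodes, +2 and -2 coincide (2 and -2 are equal mod 4), so the diagonal edges are generated twice per node and actually outnumber the +/-1 edges; demonstrating the intended dominance of +/-1 presumably requires more nodes or the full noncommutative overlap construction:

    from collections import Counter
    from itertools import product

    N = 4
    operators = [+1, -1, +2, -2]

    def step(node, op):
        # Apply an operator to a node, wrapping around the 4-node cycle.
        return (node - 1 + op) % N + 1

    # Every node generates a target via every operator; superimpose the
    # results and count how often each undirected edge occurs.
    edge_counts = Counter()
    for node, op in product(range(1, N + 1), operators):
        edge = tuple(sorted((node, step(node, op))))
        edge_counts[edge] += 1

    # "Prune" lower-count edges, standing in for the cardinality comparison.
    max_count = max(edge_counts.values())
    print(dict(edge_counts))
    print("surviving:", [e for e, c in edge_counts.items() if c == max_count])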

That wave functions span the entire universe, just with decreasing probability, seems to violate the CH, but it is consistent with the emergence of locality from the compounding of edges by count. All points are in proximity to all other points, just with vanishing probability, as paths taking direct links between distant points are less likely than paths going through edges one unit long.

If the separation isn’t a full cardinality and instead simply a lot more often, it would explain the probability of tunneling decreasing with distance. Very short tunnels in the quantum range are possible due to edges directly connecting two distant nodes are still somewhat probable paths to take between them. The probability function relating distance to probability of tunneling would actually be expressing the node density and connectivity.

Causal dynamical triangulation creates triangles radiating outward from the current time and requires the edges of its simplices (pyramids) to touch in order to build up traditional locality. Does CDT explain why this is necessary? This theory does. The infinite nodes radiating in both time directions from each now overlap, reinforcing each other from +infinity to -infinity. Only the edges lying along these triangles occur an infinite number of times; all other edges are not reinforced by other nows, making them "disappear" via CH.

Further, why did traditional physics come up with CDT? What problem does it solve for them? In this model, space and locality can't exist at all without a similar node-based space; there is no substitute. It would be unusual for someone to come up with a solution to a non-problem.

The generative function which creates past and future from each node is consistent with changes/movement propagating through edges at c in a block universe. Whether moving diagonally (through space) or vertically (through time), a change can only move one edge at a time. "Starting" from a single node, an entire universe with all its history is generated by deploying an infinite number of nodes, each of which deploys an infinite number of nodes. However, a node a billion years in the future, following consistent rules, will also generate this node now, mostly via edges leading from this node through other nodes. If the argument that states, and not their transitions, allow consciousness holds, each cluster of nodes representing an observer moment would observe a universe moment corresponding to whatever reached it at c, creating a warped "plane" of simultaneity which "always" flows through all moments and can differ between observers, as shown in relativity, but which preserves the same separation of spacetime points for all observers.
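
A minimal sketch of the one-edge-per-step claim on a 1+1D lattice (the lattice and the move set are illustrative assumptions): after t steps, the set of sites a change can reach is exactly the light cone |x| <= t:

    def light_cone(t_steps):
        # A change starts at x = 0 and moves at most one edge per time step,
        # either staying put (vertical) or stepping in space (diagonal).
        reachable = {0}
        for _ in range(t_steps):
            reachable = {x + dx for x in reachable for dx in (-1, 0, 1)}
        return reachable

    print(sorted(light_cone(3)))  # [-3, -2, -1, 0, 1, 2, 3]: the cone |x| <= 3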

6.10.7. Speed of light

The speed of light is the speed of all objects through spacetime. It's also the maximum "angle" between nodes when stepping forward or backward through time along edges. It should have some relation to the current size of the universe: the more future and past nodes that can generate the current node, the more the current node exists. A photon follows a geodesic across spacetime. If the universe is homogeneous and isotropic (all points and directions look the same), the spatial component of the photon's energy-momentum is inversely proportional to the size of the universe. That is cosmological redshift: λ ∝ R.
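
A minimal numeric check of λ ∝ R (the function and numbers are illustrative only): a photon emitted when the universe was half its current size arrives with twice its wavelength, i.e. z = 1:

    def redshifted_wavelength(lam_emit, R_emit, R_obs):
        # Wavelength stretches in proportion to the scale factor: lambda ~ R.
        return lam_emit * (R_obs / R_emit)

    print(redshifted_wavelength(500.0, R_emit=0.5, R_obs=1.0))  # 1000.0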

It’s also a measure of the maximum number of nodes a node can connect to. A greater number of connections means a higher speed of light and vice versa. Depending on how many each connect to, relaxing the node network will result in different overall shapes. Relaxing a network with very high one-to-many edges will produce a network with very steep angles between nodes when stepping forward in the time direction. Many nodes being reachable from one edge corresponds to a high speed of light, as more “distant” nodes are reachable in a single edge.

This also opens a strange can of worms: the speed of light at this moment could be based on the size of the universe at the end of time, or on how long the universe lasts, while still creating a past which appears to have the current speed of light. The speed of light tomorrow could then be different, because tomorrow generates a different consistent past.

6.11. Doomsday Paradox

Imagine a ball pit of unknown depth. In it are n balls, numbered sequentially from 1 to n and randomly distributed. You don't know n; it could be 4 or 4 quadrillion. Grab one. You pulled number 2. Is it more likely the pit is a few inches deep and holds only 4 balls, or a mile deep with 4 quadrillion? The numbers say it's much more likely there are only 4. Not guaranteed at all, but more likely. You're able to extract probability information about the depth of this chasm from a single event. That ability to know is in the nature of any universe where probability exists, or so we think.
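
A minimal sketch of the inference, assuming a uniform prior over the two candidate pit sizes; drawing ball number 2 has likelihood 1/n under a pit of n balls, so Bayes' rule heavily favours the small pit:

    def posterior(draw, sizes):
        # Likelihood of drawing this number from a pit of n balls is 1/n
        # (0 if the pit is too small to contain the number at all).
        likelihoods = {n: (1.0 / n if draw <= n else 0.0) for n in sizes}
        total = sum(likelihoods.values())
        return {n: l / total for n, l in likelihoods.items()}

    print(posterior(2, [4, 4 * 10**15]))  # the 4-ball pit gets nearly all the mass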

6.11.1. From your birth order, you can guess how long humanity will last

You are human number 100 billion or so. Is it more likely you drew that birth-order number from a pool of 1 trillion or of 100 quadrillion? It's much more likely you drew it from the lower pool. You're able to extract probability information about how many humans will ever live. You shouldn't be able to look forward into the universe like that, at least not the universe as we understand it. Future knowledge, even probabilistic knowledge like that, is supposed to be inaccessible. This is called the Doomsday paradox.
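
The same computation for birth rank, as a hedged sketch: assuming your rank r is uniform over 1..N and a scale-invariant prior P(N) proportional to 1/N over candidate totals, the posterior mass concentrates at small multiples of r:

    r = 100_000_000_000                          # roughly your birth rank
    candidates = [r * 2**k for k in range(20)]   # candidate totals r, 2r, 4r, ...
    weights = [(1.0 / n) * (1.0 / n) for n in candidates]  # likelihood * prior
    total = sum(weights)

    cumulative = 0.0
    for n, w in zip(candidates, weights):
        cumulative += w / total
        print(f"P(N <= {n:.2e}) = {cumulative:.3f}")  # most mass near small N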

Another example: the date you joined a web site and the number of posts at that time yield the same kind of estimate of the site's eventual total.

6.11.2. If you can figure out how many people will ever live, you’re not in a physical reality

This paradox completely unravels the traditional conception of spacetime. Traditional spacetime involves a far-distant starting point in the past which sweeps forward, generating moments which then become the past, up until now. The future is not yet generated in this model; it does not exist. The Doomsday paradox gives you access to information as if you were on a progress bar and the future were almost as knowable as the past. It places you at the top of a normal distribution curve, with the past telling you the approximate width and height of the curve, looking directly into the future as if it were as real as the past. This is unbelievably strange, and should not be possible in any way if spacetime is what we imagine.

6.11.3. You can only determine your position if the universe starts from now and expands into past and future

However, it is completely reasonable to conclude that you are closer to the middle of existence than to its edge in a mathematical universe which originates from now. Now generates past and future simultaneously. If you start from the middle and generate in both directions, you should expect to find yourself about in the middle of existence. Probabilistically, you would expect decay in both the forward and backward directions if now is what first exists and generates past and future "later."

6.11.4. Anywhere you can consider the Doomsday paradox, you cannot be in a physical universe

The Doomsday paradox isn’t an artifact of anything specific to our universe. It occurs whenever probability exists. As such, it is actually evidence of a now-tiled mathematical reality. Just as all observer moments in hypothetical “physical” universes are lifted to mathematical ones via the continuum hypothesis, they are also lifted to now-tiled mathematical universes by the existence of probability. If you can consider the Doomsday paradox, you are not in a physical reality, because probability exists, and as soon as probability exists (the possibility of more than one outcome), your observer moments are lifted to a mathematical universe.

6.11.5. Probability is the substance of the universes

Rather than space or time, it is probability that is "real," in that the universe seems to keep count of all possible worlds and places observer moments according to a probability distribution. That we can think about probability at all, in other words that other things could happen, eliminates physicality and pulls back the curtain on our amathematical Platonist universe, in the same way that finding you possess a gun which can fire an unlimited number of bullets reveals you are not in a real world but in a movie.

6.11.6. Axiom of Choice

The axiom of choice corresponds with observer moments and probability. It does not derive from other rules; it is instead the only "essence" of reality and must be added to set theory so that mathematics may reflect our observed reality. In a setting where all possible TNT strings interact in all ways, AC is some extant element of reality which can pair off and eliminate possible universes so that our observer moments land in probable ones, much like, and in concert with, the continuum hypothesis.

It would be expected that proofs not requiring AC differ in some interesting way. A block universe represented as a single wave function should be describable as manifolds via proofs that avoid AC, while wave-function collapse, and proofs related to its underlying mathematics, should require AC.

7. Problems

This theory suffers from innumerable bizarre problems as it tries to derive our current, consistent, historied observations from the conclusion that time, space, and matter are nonexistent, and that not only all mathematical universes exist, but also all amathematical universes and random chunks of math "floating" in an eternal, timeless void. There are things it must explain and things it need not explain.

7.1. A brain is a complex way to generate its underlying state

It’s hard to defend that an entire brain was generated as an observer moment and then generated all space and time around it and forward and backward. It’s trillions of atoms per neuron, which only encodes a few bytes of actual state. The state or experience should be generated first, followed by the minimal substrate (organic brain is not minimal) for it to exist in which also has the maximum time forward and reverse nodes to generate itself (13 billion years in reverse, how many forward?). The more complex it is, the longer it takes to evolve, the bigger history it gets and more likely it exists?

7.2. Youngness problem

Need to review what this actually is again (roughly, the youngness paradox: eternal-inflation measures weight younger universes exponentially more heavily, so we should expect to find ourselves improbably early). Maybe it will be solved by our hopping to a universe that is more complex in one way, or has a longer or shorter history.

7.2.1. Reconcile QM and GR

If the reconciliation is that QM is universe hopping and GR is our observer-generated, nonexistent now-line traversing a spacetime block, QM should be immune from relativistic effects. Depending on which parts of QM are universe hopping and which aren't, some QM effects should be impossible to predict. Or not: hopping has a probabilistic outcome.

7.2.2. Synchronization of the now-line

If time is not to exist, the now-line might need to synchronize itself across space. Is the now-line a tracing of what has had time to reach me? That seems too simple.

The now-line is also independent of the generation of the spacetime block. Whatever rules generate the block adhere to generating rules, but the now-line's imaginary traversal moves at exactly c at all points through the block, which should be independent of generation. Separating these two out is going to be hard. The laws that generate space from a node are not our laws of physics, since those are time-bound.

7.3. Must Explain

7.4. Need Not Explain

8. Predictions

Block spacetime generated from the big bang, with a "now" plane flowing through it so that each tangent line moves at c, should warp significantly at some points. The warping should be enough to skew some numbers for objects flying toward each other. A now-generated block should skew differently: I'd expect now to be a flat plane with space skewed instead, perhaps in some measurable way that differentiates the two. I have no idea what "real" time would do, as that concept has been gone from me for over a decade.

To maximize the existence of each node/moment: the more nodes in its light cone, the more that node exists, as it occurs more often. We should therefore be in a universe with an infinite past and future. Evidence of this universe lasting forever through some means would be good circumstantial evidence.

9. Summary

  1. Consciousness can be decomposed into static states.
  2. Observer moments are generated by the existence of these states, not the transition between them.
  3. A being experiences all moments of its lifetime continuously simply by having its states “written down.” Time need not exist.
  4. Why write things down? The numbers exist regardless. Space need not exist.
  5. Zeno’s paradox is dodged by the absence of motion/unimportance of state transitions.
  6. The “ensemble” is too restrictive. Non-mathematical and inconsistent mathematical structures exist, too.
  7. For each physical universe, there are infinitely many mathematical universes, preventing observer moments from existing in a physical universe.
  8. The continuum hypothesis means a 0 chance, not a fractional one, of landing in a lower-cardinality mathematical or physical universe.
  9. In the same way physical universes can’t contain observer moments, neither can simulated ones.
  10. Boltzmann brains would generate histories and futures which generated themselves, making them no longer Boltzmann but like us.
  11. Observer-moments provide a “tiling context” generating a past and future which each in turn generate that moment, if consistent.
  12. This is why the universe appears consistent: every node in our light cone generates many more consistent universes than inconsistent ones.
  13. TSVF supports the generation of now from the past and future for each moment.
  14. Doomsday paradox only makes sense in a mathematical universe generated from now, not a physical one with only a history.
  15. Whenever we can consider it, we cannot be in a real physical universe. It’s a smoking gun.

10. Implications

Doomsday paradox

Gleaning information about the future or even past from now should not be possible outside of a multiverse, implying probability is the real substance of the universe.

Continuum hypothesis

Unusual separation of cardinalities required for observer moments to maintain separation from their complements in higher or lower cardinalities.

Feynman diagram rotations

Circumstantial evidence of an eternalist block universe in which time is not a particular spacetime direction.

Zeno’s paradox

Evidence against “flow” of time.

Navier-Stokes unsolvability

Remote possibility of infinite undulating fields coalescing into fractal structures as subatomic particles. Highly speculative.

List of paradoxes to go through

11. Questions

Why are laws invariant under time reversal?

Directional, reversible time allows past and future observer moments to generate each other, compounding these moments and pushing them to higher cardinalities, placing our consciousness in historied, consistent universes.

Why is math unreasonably effective at describing the universe?

Just as we tuned the axioms of set theory to produce and solve interesting problems, the mathematical universe we inhabit was built on rules that create interesting, lasting, non-consumptive structures.

Why are mass and energy conserved?

As all possible universes exist, if there were a way to bypass conservation of mass while having sentience in the same universe, that sentience would eventually craft something that could create a runaway filling or emptying of its entire light cone, removing our observer moments.

Why do laws of physics and constants seem consistent throughout space and time?

The node network making up our spacetime block undergoes "instantaneous" relaxation to "find" consistent particle masses throughout. Any region or node which could (and would, since all interactions and values "exist") have a different mass or interact in a different way would not reinforce the existing network, nor be reinforced by it, resulting in a lower cardinality, or an equal one which generated a different spacetime block where we are not.
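
A toy relaxation illustrating the claim (the averaging rule is an assumed stand-in for whatever "relaxation" the full network performs): each node pulls its local "mass value" toward its neighbours, and dissenting values die out until the constant is uniform:

    import random

    random.seed(1)
    n = 20
    values = [random.random() for _ in range(n)]  # initially inconsistent "masses"

    for _ in range(500):
        # Each node averages itself with its two ring neighbours.
        values = [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3
                  for i in range(n)]

    print(max(values) - min(values))  # spread collapses toward 0: one shared constant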

Why are things truly random at the quantum level?

Without true randomness, conservation of mass could be violated, leading to the situation described above.

Why are there no aliens visible?

To maximize the number of observer moments and sentient beings, c keeps them distant and places black holes nearby for them to fall toward, maximizing their lifespans. Or: the absence of ETI gives us information that civilizations interacting destroy many observer moments, so in the ones we inhabit we appear to be isolated.

Why does the universe limit mass by surface area rather than by volume?

Why does the continuum hypothesis exist outside of axiomatic set theory?

I gotta learn more to answer this one, but we need it to keep fractional moments or the whole thing will be a mess.

What’s going on at the quantum level? Are we ramming the fundamental structures of logic into each other at supercolliders?

The ground state energies of the electron and other fields are “noise floors” separating real from virtual particles.

What is doing the calculating?

The AdS/CFT-style correspondence of a mathematical universe is that all nested logical structures correspond 1:1 to flat TNT strings. If all true and false TNT strings exist, and the above interpretation describes how we end up inside "interesting" nested structures that generate themselves, then "calculating" is not something the universe does at all, but a way to describe the interactions that appear to remain after pruning lower-cardinality TNT strings.

Warning: "consistent histories" is a reserved term in QM.

12. Potential Allies

13. Email

contact @ this domain

(QM many-worlds is all possible node networks that can be spliced into a region. The double slit shows an interference pattern because, just as all possible interactions happen in QED, all possible node networks happen in a space. The observer narrows down how many exist as a solution (superdeterminism). Koide formula.)