cosmology

Empirical evidence for the Axiom of Choice

(Credit to xkcd for the comic, of course.)

Near as I can figure, the go-to framework for mathematics these days is Zermelo-Fraenkel set theory, with or without the Axiom of Choice (denoted ZFC or ZF, respectively). The Axiom of Choice (AC) was contentious in its day, but the majority of mathematicians now accept it as a valid tool. It is independent of ZF proper, and many useful results require you to either adopt or reject it in your proof, but you're allowed to do either.

The simplified version of AC is that it lets you take one element from each of any collection of non-empty sets, thereby forming a new set. Where it goes beyond ZF is that it guarantees this even for infinite collections where no explicit rule for making the choices exists, something ZF alone cannot promise. Imagine you have an infinite number of buckets, each one containing an infinite number of identical marbles; AC finds it acceptable for you to posit, in the course of a proof, that you somehow select one marble from each of those buckets into a new infinite pile.
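For reference, the standard formal statement (nothing specific to this post, just the usual choice-function formulation) says that every family of non-empty sets indexed by a set %I% admits a choice function:

    \forall I \;\forall (S_i)_{i \in I}\; \Big[ \big( \forall i \in I :\; S_i \neq \varnothing \big)
        \implies \exists f : I \to \bigcup_{i \in I} S_i \;\; \forall i \in I :\; f(i) \in S_i \Big]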

Unsurprisingly, this means that AC is equivalent to a number of other famous results, notably the well-ordering theorem, which counter-intuitively states that any set can be well-ordered given the right binary relation, even e.g. %\RR%. Worse yet, it lets you (via the Banach-Tarski paradox) disassemble a tennis ball into a finite handful of pieces and reassemble them into something the size of the sun. In other words, AC is meant to be an abstract mathematical tool rather than a reflection of anything in reality.

Now, I could spend forever and a day discussing this sort of stuff, but let's move along to my thought: what it has to do with the Big Bang.

As you all know, the Big Bang ostensibly kicked off our universe some %13.8% billion years ago. However, some of the details are still kind of hazy, especially the further back you look. Past the chronological blinder of the Planck Epoch—the first %~10^(-43)% seconds of our universe—there is essentially a giant question mark. All of our physics models and equations break down past that point. Theory says that the Big Bang, at its moment of inception, was an infinitely dense zero-dimensional point, the mother of all singularities. One moment it's chillin', the next it explodes, and over time becomes our universe.

I don't love this theory, but it's the best fit we've got so far for what we've observed, particularly with red-shifting galaxies and the cosmic microwave background radiation, so we'll run with it for the moment. So where did all of these planets and stars actually come from, originally? There is a long and detailed chronology getting from there to here, but let's concern ourselves with looking at it from an information-theoretic point of view, or rather, 'How did all of this order and structure come out of an infinitesimal point?'

It's unclear, but the leading (though disputed) explanation is inflation, referring to an extremely rapid and sizable burst of expansion in the universe's first moments. There is apparently observational data corroborating this phenomenon, but a lot of the explanation sounds hand-wavy to me, and as though it were made to fit the evidence. The actual large structure of the universe is supposed to have arisen out of scale-invariant quantum fluctuations during this inflation phase, which is a cute notion.

Note, by the way, that entropy was also rapidly increasing during this step. In fact, my gut feeling on the matter is that since entropy is expected to strictly increase until the end of time (maximum entropy), it makes plenty of sense that the Big Bang kernel would have had zero entropy—hell, it's already breaking all the rules. While thermodynamic and information-theoretic entropy are ostensibly two different properties, they're one and the same principle underneath. Unless I'm gravely mistaken, no entropy would translate to complete order, or put another way, absolute uniformity.

If that was indeed the case, its information content may have been nothing more than one infinitely-charged bit (or bits, if you like); and if so, there must have been something between that first nascent moment and the subsequent arrival of complex structure that tore that perfect node of null information asunder. Whether it was indeed quantum fluctuations or some other phenomenon, this is an important point, and we will circle back to it shortly.

It's far from settled, but a lot of folks in the know believe the universe to be spatially infinite. Our observable universe is currently limited to something just shy of %100% billion light years across; however, necessary but not sufficient for the Big Bang theory is the cosmological principle, which states that we should expect the universe to be overwhelmingly isotropic and homogeneous, particularly on large scales (%300%+ million light years). This is apparently the case, with statistical variance shrinking down to a couple percent or much less, depending on what you're examining.

That last bit is icing on the cake for us. The real victory took place during whatever that moment was when a uniform singularity became perturbed. (Worth noting that if the uniformity thing is true, that mandates that it really was an outside agency that affected it in order to spawn today's superstructure, which of course makes about as little sense as anything else, what with there presumably being no 'outside' from which to act.)

So here's the punchline. If you assume an infinite universe, that means that the energy at one time trapped featureless in that dimensionless point has since been split apart into an infinitude of pieces. "But it's still one universe!" you say. Yes, but I think it's equally valid to recognize that our observable universe is finite, and as such could be represented by a set of numbers: %NN% of them if discrete, %RR% if not, or %\aleph_n% if things are crazier than we know. Regardless, it could be described mathematically, as could any of the infinitely many other light cones which probably exist, each cozy in its own observable (but creepily similar) universe.

Likewise, we could view each observable universe as a subset of the original Big Bang kernel, since that is whence everything came. The kernel must then be representable as a set whose cardinality is at least that of the power set of the collection of observable-universe pockets, and therefore the act of splitting it into these subsets was a physical demonstration of the Axiom of Choice in reality!

I'm not sure what exactly that implies, but I find it awfully spooky. I feel like it either means that some things thought impossible are possible, or that this violation happened when the Big Bang kernel was still in its pre-Planck state; but if that's the case, not only do our physical models break down, our fundamental math would be shown not to apply in that realm either, which means it could have been an anti-logical ineffable maelstrom of quasi-reality which we have no hope of ever reasoning about in a meaningful way.

Dark integers

(or, "Yet another way to learn more about the underlying structure of the universe.")

Let us assume.

I was reading my article on the distribution of computation space and going back over my results, but this time around, a new potential implication took root in my imagination.

Let's assume for the moment that Turing-completeness is necessary and sufficient for computation of any general sort, and furthermore, that there are no mechanisms in existence that can compute on a fundamentally higher level. In other words, let us assume that every computing device or mechanism is isomorphic to any other such device or mechanism.

Let's also assume that my experimental analysis of computation space was more or less correct, at least in the broad strokes. If someone else conducted a similar experiment using a substantially different instruction set or methodology, some of the specifics might turn out differently, but I suspect the more fundamental properties I found would still show up.

These aren't trivial assumptions, but they're at the very least plausible. So we proceed.

Scarcity.

Near the end of my article, I speculated that primes would probably be under-represented in a list of numbers generated by random code and sorted by frequency. This was partly an observation, but mostly a logical argument; by their nature, primes eschew association with simple patterns, and simple patterns are the best you're going to get out of a random program generator. I would not expect primes to start showing up in bulk until we reached the level of complexity where a prime sieve might reasonably form, and that's way past the scope of what we were dealing with there. (Side note: it would be interesting to figure out the shortest possible prime-generating program using that sort of basic instruction set.)
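For anyone who wants to poke at this themselves, here is a toy reconstruction of the sort of random micro-program experiment in question. The instruction set, register count, and program lengths below are invented purely for illustration; the original run used its own binary assembly, so the exact contour you get will differ, though I'd expect the broad features to survive.

    import random
    from collections import Counter

    # Toy version of the experiment: random straight-line programs over a tiny,
    # hypothetical register machine with "cheap" binary-flavored operations.
    OPS = ("INC", "DEC", "DBL", "ADD")    # ADD r_a, r_b  means  r_a += r_b
    N_REGS = 4
    MAX_LEN = 12                          # cap on random program length
    N_PROGRAMS = 1_000_000                # the real run used a billion

    def run(program):
        regs = [0] * N_REGS
        for op, a, b in program:
            if op == "INC":
                regs[a] += 1
            elif op == "DEC":
                regs[a] -= 1
            elif op == "DBL":
                regs[a] *= 2
            elif op == "ADD":
                regs[a] += regs[b]
        return regs

    counts = Counter()
    for _ in range(N_PROGRAMS):
        length = random.randint(1, MAX_LEN)
        program = [(random.choice(OPS),
                    random.randrange(N_REGS),
                    random.randrange(N_REGS)) for _ in range(length)]
        for value in run(program):
            counts[abs(value)] += 1       # absolute values, as in the plot below

    for n in range(1, 65):
        print(n, counts[n])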

What the article doesn't include is my follow-up analysis on the subject, which indeed showed that primes seem to be relatively less frequent than other integers in their neighborhood. Here's a plot relating the set of integers (absolute values were taken) to the number of times each one appeared in the cumulative output of a billion random micro-programs.

Integer Frequency through N=600

There is a lot in this plot that bears discussion, but let's stick to the highlights:

  • It is roughly logarithmic.
  • There are conspicuous spikes (and patterns of spikes), most notably around each %2^n%.
  • The red dots—you may have to click on it and zoom in to see them—are primes.

Conspicuous spikes.

If you haven't figured it out yourself, the spikes on the powers of two are a consequence of the binary assembly code in which these micro-programs are written. It's operationally trivial for computers to multiply or divide by two, so you'll see it a lot in a random distribution like this. And getting to %64% is not much more work than getting to %32% (after all, the infrastructure is already in place.) The same goes for %128%, %256%, and so on, which is why you see these spikes and why they're barely even tapering off.

Red dots.

Closer examination of where these dots fall shows that they're almost always in a crevice, or at least on a slope, which is to say that the number in question has a notably smaller chance of being generated than one or both of its neighbors.

However, primes are way too slippery a customer to allow this to always happen. Take, for example, %n=31%. This is the first prime that's sitting up on a mesa. I argued before that primes are hard to generate, so what gives?

Well, for many purposes, the distribution of primes is effectively random. Because of this, some of them are doomed to land right next to a really popular neighbor, as with our poor %31%. As discussed, %32% is especially easy to generate, and as it happens, adding or subtracting %1% is also generally a piece of cake. It's just bad luck for %31% that his neighbor attracts all this traffic for him.

Degrees of introversion.

The example of %31% is particularly egregious, since those %2^n% spikes are the biggest bullies on the playground. But all primes are affected by this phenomenon to some extent. Since a prime by its nature can't be directly conjured via a loop (read: multiplication, exponentiation, hyper-operations), the only way to get to these numbers (short of e.g. prime sieves) is to land on some composite number close to it, and then happen to add or subtract the right constant. The frequency of any prime is ultimately a reflection of its neighborhood.
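One crude way to see that mechanism numerically: do a breadth-first search for the fewest "cheap" operations needed to reach each small integer starting from %1%, and look at primes versus their neighbors. The operation set below is something I've made up for the sketch (increment, decrement, double), not the instruction set from the actual experiment.

    from collections import deque

    def op_costs(limit=600):
        """Fewest {+1, -1, x2} operations needed to reach each integer from 1."""
        cost = {1: 0}
        queue = deque([1])
        while queue:
            n = queue.popleft()
            for m in (n + 1, n - 1, n * 2):
                if 0 <= m <= 2 * limit and m not in cost:   # small slack above limit
                    cost[m] = cost[n] + 1
                    queue.append(m)
        return cost

    cost = op_costs()
    for n in (29, 30, 31, 32, 33, 36, 37):
        print(n, cost[n])

Sure enough, %32% comes out at five operations (straight doublings), %31% and %33% each cost one more, and %37% is one dearer than %36%. Under this operation set, an odd prime can only ever be reached by a final increment or decrement from an even neighbor, which is the spillover effect in miniature.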

There are lots of consequences to this arrangement. Turning it around, one good example is the case of twin primes, which are primes that differ by %2%. With the exception of %(3, 5)%, you only get twin primes when the number between them is divisible by %6%. Any multiple of %6%, being divisible by %2%, %3%, and %6%, is right up there as an easy target for random generation. The upshot here is that when you have a twin prime pair, you will almost always see a single-value spike between them, since they're surrounding such a juicy target.
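(A quick brute-force sanity check of that divisibility claim, for the skeptical:)

    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Every twin prime pair above (3, 5) straddles a multiple of 6.
    twins = [(p, p + 2) for p in range(3, 10_000) if is_prime(p) and is_prime(p + 2)]
    assert all((p + 1) % 6 == 0 for p, _ in twins if p > 3)
    print(len(twins), "twin prime pairs checked")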

But enough fun with plots.

Dark integers.

This is a term I pulled out of my ass which I like so much that I refuse to Google it and find out it's already taken.

We've shown how certain integers are "popular" in terms of computational space frequency, like the powers of two; we have also shown that some tend in the other direction, such as the primes. A dark integer, according to me, is loosely defined as an integer that's especially hard to generate through an algorithm of any kind. I am hampered by the fact that I don't have a working theory as to how to strictly identify these numbers other than by brute force, but I have some idea of their properties.

They are not necessarily prime—in fact, they seem to prefer being complicated composites with many disparate factors, often including one or more large primes. It does make some sense that this would be the ideal setup for a dark integer: the erratic, informationally dense set of factors it's built from makes it an unpopular number already, and not being prime allows it to sit next to (or near) one or more primes, which will not attract any spillover traffic.

'Darkness' in this sense is a relative term, at least for the moment. Perhaps it will make sense to define a dark number as any integer %N% with strictly larger Kolmogorov complexity than every %n < N%, although Kolmogorov complexity isn't computable in general, so that's hard to verify. At any rate, some numbers are darker than others, and while that should roughly correspond to which numbers sit lowest on a plot such as the one above, we have to remember that this is an experimentally-derived data set and prone to noise, especially as frequency decreases.
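In lieu of a real theory, here is the brute-force version stated as code. It assumes a frequency table like the one behind the plot (the `counts` Counter from the earlier sketch would do), and "dark" here just means appearing strictly less often than every smaller positive integer, which is a crude, noise-prone stand-in for the Kolmogorov idea.

    def dark_candidates(counts, limit):
        """Integers whose observed frequency is strictly below that of every
        smaller positive integer: a rough, relative notion of 'darkness'."""
        darks = []
        running_min = counts.get(1, 0)        # lowest frequency seen among m < n
        for n in range(2, limit + 1):
            c = counts.get(n, 0)
            if c < running_min:
                darks.append(n)
            running_min = min(running_min, c)
        return darks

    # e.g. dark_candidates(counts, 5000); if some integer below the limit never
    # appears at all, the first such integer will be the final entry returned.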

That said, I would like to tip my hat to %3465%, which was the lowest number that did not appear a single time in my billion programs. Wolfram Alpha has this to say about %3465%:

  • %3465 = 3^2 \times 5 \times 7 \times 11%.
  • %3465% has the representation %3465 = 2^7 \times 3^3 + 9%.
  • %3465% divides %34^6-1%.

Whether any of that is significant is too soon to say.

Time to get crazy.

So, do you remember what we're assuming about computers? Basically, that a computer is a computer by any other name? It comes into play now.

  • Assumption: The universe is not magic. It plays by the same Turing-equivalent rules as everything in it. This means physics is ultimately deterministic, and if we had infinite time and infinite memory, we could simulate our universe on a Turing machine.
  • Assumption: The Big Bang happened. The initial configuration of energy at its inception was effectively random.

The universe seemingly operates by following physical laws, with all matter and energy having been kicked off by that hot mess of a Big Bang. This strikes me as loosely analogous to my Skynet micro-program generator, what with random initial conditions evolving under clearly defined, step-by-step processes. More to the point, our earlier assumption posits that they are mathematically interchangeable for our purposes.

Hypothesis: For any given countable set of anything in the universe, the quantity of that thing is relatively unlikely to be a dark integer, all else being equal.

To test the hypothesis:

  1. Identify dark integers. The more, the larger, the merrier.
  2. Identify anything that can be quantified.
  3. Quantify the anything.
  4. Repeat to within error bounds for confirmation or disproof.

There may, unfortunately, be some bounds on what the anything could actually be. Ideally, it would be as simple as catching a bunch of atoms of something over and over and tracking the result, but more likely it would have to be something more directly related to the Big Bang, such as star counts in galaxies.
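As a sketch of what step 4 might look like statistically (everything here is a placeholder: `dark_set` would come from something like `dark_candidates` above, and `observed_counts` would be whatever quantities you went out and measured, say star counts per galaxy):

    import random

    def dark_deficit(observed_counts, dark_set, lo, hi, trials=10_000):
        """Compare how often real-world counts land on dark integers with what
        uniform-random counts over the same range would give."""
        in_range = [n for n in observed_counts if lo <= n <= hi]
        hits = sum(n in dark_set for n in in_range)

        # Null model: counts scattered uniformly over [lo, hi]. (Crude; a real
        # test would want a null matching the observed magnitude distribution.)
        null_hits = []
        for _ in range(trials):
            sample = (random.randint(lo, hi) for _ in range(len(in_range)))
            null_hits.append(sum(n in dark_set for n in sample))

        p_value = sum(h <= hits for h in null_hits) / trials   # one-sided: deficit
        return hits, sum(null_hits) / trials, p_value

    # Hypothetical usage:
    #   hits, expected, p = dark_deficit(star_counts_per_galaxy, dark_set, 1, 5000)
    # A small p would mean dark integers are under-represented, per the hypothesis.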

If all of this were sound, conclusions could be drawn.

If dark numbers do turn out to be under-represented, it may be further evidence of the truly random and arbitrary nature of creation, and more importantly, it may be strong evidence of a deterministic universe and all that that implies.

If there is no discrepancy when it comes to dark numbers, it may imply that there is a true stochastic or otherwise bizarre layer to physics, or it may imply that the Big Bang was ordered more deliberately than one would think, or that there is some agency at work that is disrupting the "natural" order of things in one way or another.

Either way, we learn.


Addendum

The dark integers on the graph above are specific to the instruction set used to generate it. While there is reason to expect that most sets of operations will be similar inasmuch as a binary system seems to be the simplest approach, it is not a guarantee. One can imagine pathological instruction sets that will yield an entirely different contour, such as one that uses the primes as its base units.

Long story short, if the binary analysis doesn't turn up results, it might be worth investigating the possibility of reverse engineering an instruction set by identifying its specific dark integers and trying to tease out what sort of atomic operations would give rise to those holes.

While applying any of this to cosmology is wildly speculative, biology and the mechanism of evolution seem like perfect candidates for testing the viability of dark integers in general. Evolution is undeniably a finite computational force that lends itself well to the whole idea. Multi-cellular organisms are literally built from code, and it stands to reason that, on balance, evolutionary pressures will tend to restrict most quantifiable biological features to easier-to-compute amounts. More on this as I investigate.

No life among the stars

On Facebook, in response to this article that's been circulating today for some inscrutable reason, one A. Melaragni said:

Apparently the guy is just talking about our own galaxy; there are BILLIONS AND BILLIONS of galaxies in the universe. Not to mention that even if other life doesn't exist in our galaxy now, that doesn't mean it never did or never will. There is just so much room out there that I think it would be bizarre if life DIDN'T exist elsewhere.

Which got me thinking. In the absence of proof positive one way or another about the existence of life elsewhere or elsewhen, we can still do some perfectly legitimate Bayesian reasoning based on what we've [not] observed, and how the universe seems to work.

While I don't really buy it, I'll concede there are some legitimate reasons to think we might be the only life anywhere. The only other plausible option is that there is an abundance of life: is, was, and will be.

My reasoning is that physics doesn't do one-offs. You don't see a new kind of particle exactly once and never again; you don't see one wildly unique type of stellar body sitting among trillions of others in our galaxy. In general, there are strong mathematical reasons why, if anything happens once, it (or something similar enough) will most likely happen twice. And if there's anything less likely than something happening just once, it's something happening just twice, which would make a mockery of probability.

So, I figure it's probably not just us out here, and if that's the case, it's certainly not going to be just us and one or two other planets of life. Let's consider the two main possibilities.

Let there be life.

In the scenario where intelligent life happens, and where you accept my assertion that there will probably be a lot of it, we can draw some conclusions. Crucially, we are unlikely to be unusual, within the range of all life that eventually takes form. Barring a meddling God, the various salient characteristics of a lifeform—longevity, intelligence, size, high reliance on optical/EM sensory input, overall temperament—are gonna end up distributed as big fat Gaussians, and we're gonna be right near the big fat middle of them for most of these things. Yes, there will be truly alien and bizarre creatures out on the fringes, so that's fun, but it's not us.

This also applies to our timeline of development of technology relative to others. We are one species selected at random from all those who have or will exist. There are some cosmological reasons why we might be one of the earlier ones, but I feel that's heavily outweighed by the implicit probabilistic evidence under discussion; if there are to be a million different life-bearing planets, what are the odds we're the very first to start to get our shit together?

Then of course you run into your Fermi paradox, which on the whole is pretty ominous. Without getting too sidetracked in that, it strongly suggests that either we'll never make it to the stars, or when we do, it will be in a form or mode unrecognizable by present-day us.

To recap:

  • If there's any life besides us, there's probably a shitload of it.
  • If there's a shitload of it, lots of them probably have a huge head start.
  • For whatever reason, they're all gone, unrecognizable, or (at best) undetectable. Without exception. This implies that whatever the attractor at work might ultimately be, it is likely inevitable and undeniable.

Basically, for anyone who dreams of 50s-scifi-style cruising around the galaxy and meeting aliens, give it up. Either there aren't any out there, or there is some overwhelmingly strong reason not to make contact on those sorts of terms, or there'll be some pesky obstacle like inexorable self-annihilation in the way. If we ever do make it to the tipping point where we have the social and engineering infrastructure for interstellar flight, and start doing it up in earnest, I figure that's the nail in the coffin for there being any other life out there. So, the other scenario:

Let there be no life.

In this scenario we are, somehow, a black swan—most likely, there would still be an infinite number of aliens in our infinite universe, but they'd be so negligibly rare as to essentially guarantee none in our light cone, which means they effectively don't exist. There's not a whole lot to say on this other than to point out the silver lining—that the massive, invisible, all-subsuming agent watching us hungrily from behind the curtain of the Fermi paradox would suddenly become a non-issue.

It may be a lonely existence, but this scenario is one in which there's no especially good reason why we can't go exploring all over observable space and spread like cancer. There just won't be all that much more to do out there, at least until we say "okay, fuck it" and seed some new form of life deliberately. I think that, of the two scenarios, this is actually the better deal, as it sidesteps the many and sundry sinister explanations for the current deafening silence.


Note to self: there was something worth exploring there that I skipped by. Given certain kinds of systems governed by a relatively small set of rules but seeded with some level of random initial conditions or ongoing perturbations, is it in fact true that it can be less likely for something to happen twice than to happen once? And if so, how many times must a thing happen before the probabilities pull even again? I think there could be depth there. Maybe more weight behind %0%, %1%, and %oo% being the only relevant quantities of things. Which is arguably already well established. And %oo% is just a gussied-up %0%.

But I digress.

The reason for relativity

As usual, the main ideas here are wildly speculative, written down as food for thought. I am not claiming to be a cosmetologist or astrologer.

For the purposes of this post, let's suppose that the universe is ultimately discrete. By this, I mean that when you get down to its most fundamental level, there are "pixels" in one way or another, some ultimate underlying property of everything which is atomic and indivisible, digital and not analog. If the uncertainty inherent in quantum mechanics drives the deepest level, then the rest of this may not apply, but it is certainly possible that there are deeper underlying forces yet to be identified, so we don't know yet.

Note that the discreteness of the universe does not preclude space or time from being infinite (although it is arguably an infinity of a lower order). However, I'm about to suggest that it does link the finiteness of space and time inextricably: either they are both infinite, or both finite.


Consider a discrete universe with a finite volume, but lasting forever. Any such universe could be reasonably treated as an enormous finite-state machine. No matter how vast it might be, there would be only so many possible configurations it could take on before repeating itself.

If its physics are deterministic—configured such that any given arrangement of "pixels" (or "atoms" or "bits") necessarily defines its successor—it would use only a tiny fraction of all possible states. Even if there is some purely random influence at that level, and it could conceivably reach every single possible state, there would still be a limit to the number of possible states.

Granted, it would be enormous; for example, assuming a universe comprising %10^100% bits, there would be %2^(10^100)% possible configurations for it to be in at any given moment. Note that this is a big fucking number; we're talking %~30000...0% possible states, where the %...% stands in for roughly %3 \times 10^99% further digits. The digit count alone is a hundred-digit number.

If we consider its progression over time, we're looking at all the permutations of those configurations, representing an upper bound on every possible narrative the universe could follow; there would be %(2^(10^100))!% of them. The number above, which we could not write out because it has more digits than there are electrons in the observable universe, is roughly the number of digits of this new, incomprehensibly larger number (give or take a factor that is itself about a hundred digits long). Yet still finite, so overall, pretty small as numbers go.
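To put rough numbers on that, here's a back-of-the-envelope script. The quantities are far too large to represent directly, so everything is done with logarithms; nothing here is exact, just order-of-magnitude.

    from math import log10, e

    bits = 1e100                             # assumed: a universe of 10^100 bits

    # Digits of 2^(10^100): floor(10^100 * log10(2)) + 1, give or take.
    digits_of_state_count = bits * log10(2)  # ~3.01e99
    print(f"2^(10^100) has about {digits_of_state_count:.3e} digits")

    # Stirling: log10(N!) ~ N * (log10(N) - log10(e)), with N = 2^(10^100).
    # N itself overflows any float, so keep carrying its logarithm around:
    log10_N = digits_of_state_count          # log10 of the state count
    log10_digits_of_permutations = log10_N + log10(log10_N - log10(e))
    print(f"(2^(10^100))! has about 10^({log10_digits_of_permutations:.3e}) digits")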

But I digress. So let's focus on the deterministic case, which seems likely to be the correct one in a discrete universe. If we have a finite number of bits, that means that sooner or later, we're bound to come back to an exact state that already occurred in the past. That point necessarily defines a set of states through which the universe will cycle, over and over, forever.

Each cycle will only take a finite amount of time. This means that time cannot truly be termed infinite, since there would be absolutely no reference point by which a state occurring in one cycle could be distinguished from the same state in the next cycle. Time would be a loop of finite length. Thus: in a deterministic universe, finite space implies finite time.
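This is just the pigeonhole principle at work, and it's easy to watch in miniature. The sketch below (a toy, obviously, not a cosmology) builds a random deterministic successor function on a smallish state space and walks it until a state repeats; it always falls into a cycle, and usually quite quickly relative to the size of the space.

    import random

    def find_cycle(n_states=1_000_000, seed=0):
        """Iterate a random deterministic successor function on a finite state
        space until some state repeats; report lead-in length and cycle length."""
        rng = random.Random(seed)
        successor = [rng.randrange(n_states) for _ in range(n_states)]

        seen = {}                   # state -> step at which it first occurred
        state, step = rng.randrange(n_states), 0
        while state not in seen:
            seen[state] = step
            state = successor[state]
            step += 1
        return seen[state], step - seen[state]

    lead_in, cycle_len = find_cycle()
    print(f"fell into a cycle of length {cycle_len} after {lead_in} steps")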


What was less clear to me is whether the converse follows, that finite time implies finite space. The symmetry of space and time biases me strongly towards thinking this is probably the case, but let's look at it.

In the finite-space situation above, I claim that time is effectively finite because there are a limited number of distinct states, rendering it meaningless to speak of any time beyond that necessary to differentiate each of them. In a finite-time universe containing infinite space, we might be tempted to look for the same general pattern: a finite cycle such that regardless of distance traveled (rather than time elapsed), there are a limited number of possibilities one can end up with.

Cue relativity.

Let's consider our infinite-space universe as an infinite number of bits spiraling out from a point-source observer (you). Pretend there is no speed of light limiting things in this universe. Even with a finite lifespan, the universe would still very much be spatially infinite: you could observe every single bit in that infinite string, and you could also travel an arbitrary distance within it, ending up surrounded by an arbitrarily large amount of completely different information than wherever you started. This is clearly not analogous to the finite-space situation above.

How might we fix that asymmetry between time and space if we were designing a universe and wanted to keep things simple and balanced? Mix in relativity. With the speed of light governing things, any observer is now limited to a light cone demarcating a strict upper bound on the observable universe; this is equivalent to establishing a finite limit to the number of bits and consequently number of possible states perceivable for a finite interval of time.
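To make that "finite limit" concrete with one made-up discretization (the cell size and lifetime below are placeholder choices for illustration, not claims about actual physics): if space is a lattice of cells and an observer only gets a finite run of time, the speed of light caps the number of cells that could ever influence them.

    from math import pi

    c = 3.0e8                    # speed of light, m/s
    cell = 1.6e-35               # assumed cell size: roughly the Planck length, m
    lifetime = 13.8e9 * 3.15e7   # assumed finite lifetime: ~13.8 billion years, in s

    radius_in_cells = c * lifetime / cell
    observable_cells = (4.0 / 3.0) * pi * radius_in_cells ** 3
    print(f"~{observable_cells:.1e} cells ever observable")   # ~2e183: vast, but finite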

In that sense, the invariant nature of the speed of light seems almost specifically tailored for the purpose of linking space and time together. All of the fun relativistic side effects you end up with are logical necessities conspiring to limit the information available starting at any point in space over a given period of time. Thus: in a deterministic universe, finite time implies finite space—so long as you have relativity!

Finally, taking the contrapositives of these two assertions together, we get the corollary that infinite space implies infinite time and vice versa.

Conclusion: Maybe that's why relativity.

On reality

There's this longstanding observation that we live in a "fine-tuned universe," which is to say that several fundamental physical constants (e.g. strength of gravity, electromagnetism) appear to be set perfectly for us. If they were any different, even by one part in a billion, the universe would have evolved very differently and failed to give rise to sufficient complexity to eventually enable life as we know it. Which, at first glance, seems awfully suspicious.

One proposed explanation for this is the Anthropic Principle, which tries to couch the whole thing as a selection effect. The claim is that obviously the cosmological constants had to be set that way, because if they weren't, nobody would be around to notice, and therefore no meaningful conclusions can be drawn. While the basic premise makes sense, arguing that there's nothing to be inferred strikes me as a load of horseshit.

If there were only one universe (ours), with one set of laws and constants, the probability of it just happening to work out like this is so close to %0% as to be negligible, barring the possibility that it was deliberately engineered that way (which is a whole different discussion). So, it seems extremely likely to me that this should be taken as strong evidence in support of a multiverse of one form or another, consisting of a vast number (or more likely, an infinite number) of universes with varying laws of physics and initial conditions. This would neatly account for our seemingly extraordinarily unlikely circumstances, and incidentally explains and recolors Occam's Razor not just as a heuristic, but as a de facto selection pressure in its own right.

So what?

Well, now let's look at consciousness and experience as computation. If you discount the notion of a soul, or other hand-wavy quantum mechanical effects, all we are is our own brain, fed appropriate input. The brain is presumably Turing-complete, essentially a computer—or, more to the point, it operates on principles that could be precisely modeled on a computer. You could be running on a PC right now, and if the model and coding were all correct, there'd be no subjective difference for you.

Of course, that's just The Matrix 101. But the problem is, things get weirder. It turns out that a huge variety of systems can be made Turing-complete, which is to say capable of carrying out any computation that a PC (or a brain) could execute. Bored CS students have designed Turing machines out of Tinkertoys and Minecraft levels—hell, you could even have a hundred Tibetan monks pushing around beads on abacuses, and so long as they're conducting computation that can be mapped isomorphically onto neural wetware, the end result is that you, your whole life and apparent consciousness and free will, could be no more than a consequence of those beads getting pushed. Although this sounds a little out there, it all follows logically, and I think a growing consensus is emerging about it among people working in these fields.

So what's the first thing got to do with the second?

If we assume all possible universes exist and the level of algorithmic complexity of each universe is uniformly distributed over that infinite set (which seems like a plausible assumption), then universes dominated by the simplest of laws will be infinitely dense relative to more rarefied and complexity-heavy universes like our own. I expect that many of those universes will be appropriately configured such that they end up running a huge amount of computation, whether through a physical substrate (think an infinitely physically large universe consisting of random quantum fluctuations affecting clumps of molecules that tend to form NAND gates) or directly in some kind of even more abstract mathematical formalism.

If that's true, then every person that does exist or could exist and every life they could live would be inevitably simulated by just a single one of these Turing-verses. Which would mean, in turn, that it is essentially mathematically certain that you're not really here in the sense that you think, but are instead one of those simulations in a completely different and alien universe.

Now the question is how one could falsify this whole hypothesis, or better yet, if it turns out to be true, how one could arrange things in our universe such that we tunnel our way out into some control of the underlying Turing-verse.

And it's also conceivable that carrying out computation itself is not necessary to give rise to our subjective reality. It may be that just the pattern itself, in static form, is sufficient. That would be a beautiful thing, because it would open up the possibility that the only thing that exists in all of creation, across all universes and all time, is a single number encoding all things, in exactly the same way that any phone number, book, or data representation of an entire life or world is presumably contained within the digits of the number pi (assuming, as is widely believed but unproven, that pi is normal).