
Factorials vs. Power Sets

Wanted to jot this down as food for thought before I forgot. And so I did.

So we have factorials, denoted with a %!% suffix, e.g. %4! = 1 \times 2 \times 3 \times 4 = 24%, or more generally \[n! := \prod_{k=1}^{n} k = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot (n-1) \cdot n.\] Among many other things, %n!% represents the number of possible permutations of a set of %n% unique elements, that is, the number of different ways we can order a group of things.

We've also got %2^x%, the size of the "power set" of a set with %x% elements. If we have a set %\mathbf{S}% containing %|\mathbf{S}|% items, %2^{|\mathbf{S}|}% is the total number of distinct subsets it has, including the empty set and %\mathbf{S}% itself. The notion of a power set plays a significant role in transfinite math.
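As a quick sanity check (a throwaway Python snippet of my own, nothing from any reference), here are both counts verified on a small set:

```python
from itertools import combinations, permutations
from math import factorial

s = ["a", "b", "c", "d"]   # any small set of distinct elements
n = len(s)

# n! counts the orderings (permutations) of the whole set.
assert len(list(permutations(s))) == factorial(n)        # 24 when n = 4

# 2^n counts the distinct subsets, empty set and full set included.
subsets = sum(len(list(combinations(s, k))) for k in range(n + 1))
assert subsets == 2 ** n                                  # 16 when n = 4
```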

%\aleph _ {0}% is the "smallest" of the infinities, the cardinality of any countably infinite set (e.g. the integers %\mathbb{Z}%). Its power set has cardinality %2^{\aleph _ {0}}%, which is also the cardinality of the reals %\mathbb{R}%. The Continuum Hypothesis (%\mathsf{CH}%) asserts that this is the very next cardinality after %\aleph _ {0}%, i.e. %2^{\aleph _ {0}} = \aleph _ {1}%; the Generalized Continuum Hypothesis (%\mathsf{GCH}%) extends this, asserting that there is no cardinality strictly between any %\aleph _ {k}% and its power set %2^{\aleph _ {k}}%, so that %2^{\aleph _ {k}} = \aleph _ {k+1}%. (The Axiom of Choice is a separate assumption, which guarantees that every infinite cardinality shows up somewhere in the %\aleph% hierarchy in the first place.)

And there's your background. So, I wondered whether %n!% or %2^n% grows faster; it does not take too much thought to realize it's the former. While %2^n% merely doubles with each increment of %n%, %n!% grows by an ever-increasing factor. In fact, it follows that even for an arbitrarily large constant %C%, you still end up with %\lim _ {n\rightarrow\infty} (n! - C^n) = \infty%. (The limit also holds for division: %\lim _ {n\rightarrow\infty} \frac{n!}{2^n} = \infty%.)
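To see it numerically (my own throwaway script, working in log space via %\log\Gamma% so nothing overflows): even against a hefty base like %C = 1000%, the log of the ratio %\frac{n!}{C^n}% eventually turns around and heads off to infinity.

```python
from math import lgamma, log

C = 1000  # an arbitrarily large-ish constant base

# lgamma(n + 1) = ln(n!), so this is ln(n! / C^n); it dips negative at first
# (C^n wins early on) and then grows without bound once n gets large enough.
for n in (10, 100, 1000, 3000, 10000):
    log_ratio = lgamma(n + 1) - n * log(C)
    print(f"n = {n:6d}   ln(n!/C^n) ≈ {log_ratio:,.0f}")
```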

But here is where my understanding falters. We've seen that %n!%, in the limit, is infinitely larger than %2^n%; I would think it follows that it is therefore a higher cardinality. But when you look at %2^{\aleph _ {k}}% vs. %\aleph _ {k} !%, some obscure paper I just found (and also Wolfram Alpha) would have me believe they're one and the same, and consequently (under %\mathsf{GCH}%) both equal to %\aleph _ {k+1}%.
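For the record, the fact in question (a standard result of cardinal arithmetic, as best I can tell) is that for any infinite cardinal %\kappa%, the permutations of a set of size %\kappa% are exactly as numerous as its subsets: \[\kappa ! = 2^{\kappa}, \qquad \text{so under } \mathsf{GCH}: \quad \aleph _ {k} ! = 2^{\aleph _ {k}} = \aleph _ {k+1}.\]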

Unfortunately, I can't articulate exactly why this bothers me. If nothing else, it seems counter-intuitive that on the transfinite scale, permutations and subsets are effectively equivalent in some sense.

...but suddenly I realize I'm being dense. One could make the same growth-rate argument for %3^n% over %2^n% as for %n!%, yet %3^{\aleph _ {0}} = 2^{\aleph _ {0}}%; and in any case, for finite %n% all of these quantities are just natural numbers, blatantly countable, so growth rates say nothing about transfinite cardinalities. Aha. Well, if there was anything to any of this, it was that bit about permutations vs. subsets, which still seems provocative.

Well, next time, maybe I'll put forth my interpretation of %e% as a definition of %\mathbb{Z}%. Whether it is the definition, or one of infinitely many differently-shaded definitions encodable in various reals (see %\pi%), well, I'm still mulling over that one...

Empirical evidence for the Axiom of Choice


(Credit to xkcd for the comic, of course.)

Near as I can figure, the go-to framework for mathematics these days is Zermelo-Fraenkel set theory, with or without the Axiom of Choice (ZFC or ZF, respectively). The Axiom of Choice (AC) was contentious in its day, but the majority of mathematicians now accept it as a valid tool. It is independent of ZF proper, and many useful results hinge on whether you adopt or reject it in your proof, but you're allowed to do either.

The simplified version of AC is that it lets you pick one element from each of any collection of non-empty sets, thereby forming a new set. Where it goes beyond ZF is in allowing infinitely many of these choices to be made at once, without specifying any rule for how each choice is made; ZF alone can't always justify that. Imagine you have an infinite number of buckets, each one containing an infinite number of identical marbles; AC finds it acceptable for you to posit, in the course of a proof, that you somehow select one marble from each of those buckets into a new infinite pile.
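Stated a little more formally (this is just the standard choice-function formulation, not anything novel): for any family %\mathcal{F}% of non-empty sets, AC asserts the existence of a function %f% defined on %\mathcal{F}% with \[f(S) \in S \quad \text{for every } S \in \mathcal{F},\] even when %\mathcal{F}% is infinite and there is no describable rule for which element %f% should pick.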

Unsurprisingly, this means that AC is equivalent to a number of other famous results, notably the well-ordering theorem, which counter-intuitively states that any set, even %\mathbb{R}%, can be well-ordered: given the right ordering relation, every non-empty subset has a least element. Worse yet, it lets you (via the Banach-Tarski paradox) disassemble a tennis ball into finitely many pieces and reassemble them into something the size of the sun. In other words, AC is meant to be an abstract mathematical tool rather than a reflection of anything in reality.

Now, I could spend forever and a day discussing this sort of stuff, but let's move along to my thought: what it has to do with the Big Bang.

As you all know, the Big Bang ostensibly kicked off our universe some %13.8% billion years ago. However, some of the details are still kind of hazy, especially the further back you look. Past the chronological blinder of the Planck epoch (the first %\sim 10^{-43}% seconds of our universe) there is essentially a giant question mark. All of our physics models and equations break down past that point. Theory says that the Big Bang, at its moment of inception, was an infinitely dense zero-dimensional point, the mother of all singularities. One moment it's chillin', the next it explodes, and over time becomes our universe.

I don't love this theory, but it's the best fit we've got so far for what we've observed, particularly with red-shifting galaxies and the cosmic microwave background radiation, so we'll run with it for the moment. So where did all of these planets and stars actually come from, originally? There is a long and detailed chronology getting from there to here, but let's concern ourselves with looking at it from an information-theoretic point of view, or rather, 'How did all of this order and structure come out of an infinitesimal point?'

It's unclear, but the leading (though disputed) explanation is inflation, referring to an extremely rapid and sizable burst of expansion in the universe's first moments. There is apparently observational evidence corroborating this phenomenon, but a lot of the explanation sounds hand-wavy to me, as though it were made to fit the evidence after the fact. The actual large-scale structure of the universe is supposed to have arisen out of scale-invariant quantum fluctuations during this inflation phase, which is a cute notion.

Note, by the way, that entropy was also rapidly increasing during this step. In fact, my gut feeling on the matter is that since the entropy of the universe as a whole is expected to keep increasing until the end of time (maximum entropy), it makes plenty of sense that the Big Bang kernel would have had zero entropy; hell, it's already breaking all the rules. While thermodynamic and information-theoretic entropy are ostensibly two different properties, I'd argue they're one and the same principle underneath. Unless I'm gravely mistaken, zero entropy would translate to complete order, or put another way, absolute uniformity.

If that was indeed the case, its information content may have been nothing more than one infinitely-charged bit (or bits, if you like); and if so, there must have been something between that first nascent moment and the subsequent arrival of complex structure that tore that perfect node of null information asunder. Whether it was indeed quantum fluctuations or some other phenomenon, this is an important point to which we'll circle back shortly.

It's far from settled, but a lot of folks in the know believe the universe to be spatially infinite. Our observable universe is currently limited to something just shy of %100% billion light years across; however, necessary but not sufficient for the Big Bang theory is the cosmological principle, which states that we should expect the universe to be overwhelmingly isotropic and homogeneous, particularly on large scales (%300%+ million light years). This is apparently the case, with statistical variance shrinking to a couple percent or much less, depending on what you're examining.

That last bit is icing on the cake for us. The real victory took place during whatever that moment was when a uniform singularity became perturbed. (Worth noting that if the uniformity thing is true, it mandates that it really was an outside agency that perturbed it in order to spawn today's superstructure, which of course makes about as little sense as anything else, what with there presumably being no 'outside' from which to act.)

So here's the punchline. If you assume an infinite universe, that means that the energy at one time trapped featureless in that dimensionless point has since been split apart into an infinitude of pieces. "But it's still one universe!" you say. Yes, but I think it's equally valid to recognize that our observable universe is finite, and as such, could be described by countably many numbers (%\mathbb{N}%) if it's discrete, by a continuum (%\mathbb{R}%) if it's not, or by some %\aleph _ {n}%'s worth if things are crazier than we know. Regardless, it could be described mathematically, as could any of the infinitely many other light cones which probably exist, each cozy in its own observable (but creepily similar) universe.

Likewise, we could view each observable universe as a subset of the original Big Bang kernel, since that is whence everything came. The kernel must be representable as a set whose cardinality is equal to or larger than that of the power set of all the observable-universe pockets, and therefore the act of splitting it into these subsets was a physical demonstration of the Axiom of Choice in reality!

I'm not sure what exactly that implies, but I find it awfully spooky. I feel like it means either that some things thought impossible are in fact possible, or that this violation happened while the Big Bang kernel was still in its pre-Planck state. If it's the latter, then not only do our physical models break down there, our fundamental math would be shown not to apply in that realm either, which means it could have been an anti-logical, ineffable maelstrom of quasi-reality which we have no hope of ever reasoning about in a meaningful way.

The reason for relativity

As usual, the main ideas here are wildly speculative, written down as food for thought. I am not claiming to be a cosmetologist or astrologer.

For the purposes of this post, let's suppose that the universe is ultimately discrete. By this, I mean that when you get down to its most fundamental level, there are "pixels" in one way or another, some ultimate underlying property of everything which is atomic and indivisible, digital and not analog. If the uncertainty inherent in quantum mechanics drives the deepest level, then the rest of this may not apply, but it is certainly possible that there are deeper underlying forces yet to be identified, so we don't know yet.

Note that the discreteness of the universe does not preclude space or time from being infinite (although it is arguably an infinity of a lower order). However, I'm about to suggest that it does link the finiteness of space and time inextricably: either they are both infinite, or both finite.


Consider a discrete universe with a finite volume, but lasting forever. Any such universe could be reasonably treated as an enormous finite-state machine. No matter how vast it might be, there would be only so many possible configurations it could take on before repeating itself.

If its physics are deterministic—configured such that any given arrangement of "pixels" (or "atoms" or "bits") necessarily defines its successor—it would use only a tiny fraction of all possible states. Even if there is some purely random influence at that level, and it could conceivably reach every single possible state, there would still be a limit to the number of possible states.

Granted, it would be enormous; for example, assuming a universe comprising %10^{100}% bits, there would be %2^{10^{100}}% possible configurations for it to be in at any given moment. Note that this is a big fucking number; written out in decimal it would run to roughly %3 \times 10^{99}% digits, and that digit count is itself a hundred-digit number.

If we consider its progression over time, we're looking at all the permutations of those configurations, representing an upper bound on every possible narrative the universe could follow, of which there would be %\left(2^{10^{100}}\right)!% in total. The number above, which we could not write out because it has more digits than there are electrons in the observable universe, is roughly the number of digits of this new, incomprehensibly larger number. (Yet still finite, so overall, pretty small as numbers go.)
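A quick back-of-envelope check of those digit counts (my own script, working entirely in log space via Stirling's approximation, since the numbers themselves are hopelessly unrepresentable):

```python
from math import e, log10

K = 1e100                    # hypothetical bit-count of the universe
log10_N = K * log10(2)       # N = 2^K configurations; log10(N) ≈ 3.01e99

# N itself has about log10(N) + 1 ≈ 3 x 10^99 digits.
print(f"digits of 2^(10^100):    ~{log10_N:.2e}")

# Stirling in log space: log10(N!) ≈ N * (log10(N) - log10(e)), so the digit
# count of N! is roughly N times a factor (~3e99) that is negligible here.
log10_digits_of_N_fact = log10_N + log10(log10_N - log10(e))
print(f"digits of (2^(10^100))!: ~10^({log10_digits_of_N_fact:.2e})")
```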

But I digress. So let's focus on the deterministic case, which seems likely to be the correct one in a discrete universe. If we have a finite number of bits, that means that sooner or later, we're bound to come back to an exact state that already occurred in the past. That point necessarily defines a set of states through which the universe will cycle, over and over, forever.
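Here's a toy illustration (the update rule is completely made up; the only point is that any deterministic rule on finitely many bits must eventually revisit a state and start looping):

```python
N_BITS = 16
MASK = (1 << N_BITS) - 1

def step(state: int) -> int:
    """Arbitrary deterministic update rule (a xorshift-style bit mix)."""
    state ^= (state << 7) & MASK
    state ^= state >> 9
    state ^= (state << 8) & MASK
    return state & MASK

state = 0b1011001110001111    # arbitrary initial configuration
seen = {}                     # state -> step at which it first appeared
t = 0
while state not in seen:      # at most 2**N_BITS iterations, so this halts
    seen[state] = t
    state = step(state)
    t += 1

print(f"state repeats after {t} steps; cycle length = {t - seen[state]}")
```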

Each cycle will only take a finite amount of time. This means that time cannot truly be termed infinite, since there would be absolutely no reference point by which a state occurring in one cycle could be distinguished from the same state in the next cycle. Time would be a loop of finite length. Thus: in a deterministic universe, finite space implies finite time.


What was less clear to me is whether the converse follows, that finite time implies finite space. The symmetry of space and time biases me strongly towards thinking this is probably the case, but let's look at it.

In the finite-space situation above, I claim that time is effectively finite because there are a limited number of distinct states, rendering it meaningless to speak of any time beyond that necessary to differentiate each of them. In a finite-time universe containing infinite space, we might be tempted to look for the same general pattern: a finite cycle such that regardless of distance traveled (rather than time elapsed), there are only a limited number of distinct situations one can end up in.

Cue relativity.

Let's consider our infinite-space universe as an infinite number of bits spiraling out from a point-source observer (you). Pretend there is no speed of light limiting things in this universe. Even with a finite lifespan, the universe would still very much be spatially infinite, both in the sense that you could observe every single bit in that infinite string, and in the sense that you could travel an arbitrary distance within it, ending up surrounded by an arbitrarily large amount of information completely different from wherever you started. This is clearly not analogous to the finite-space situation above.

How might we fix that asymmetry between time and space if we were designing a universe and wanted to keep things simple and balanced? Mix in relativity. With the speed of light governing things, any observer is now limited to a light cone demarcating a strict upper bound on the observable universe; this is equivalent to establishing a finite limit on the number of bits, and consequently on the number of possible states, perceivable within a finite interval of time.
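To make that "finite limit on the number of bits" concrete, here's a deliberately crude estimate of my own (it ignores expansion entirely and just counts Planck-length cells inside a sphere of radius %ct%); the exact figure doesn't matter, only that it comes out finite:

```python
from math import pi

c        = 2.998e8            # speed of light, m/s
t        = 13.8e9 * 3.156e7   # ~13.8 billion years, in seconds
l_planck = 1.616e-35          # Planck length, m

radius = c * t                            # naive causal horizon, ~1.3e26 m
volume = (4.0 / 3.0) * pi * radius**3     # m^3
cells  = volume / l_planck**3             # Planck-volume "pixels" in the cone

print(f"radius ≈ {radius:.2e} m, Planck cells ≈ {cells:.2e}")  # ~2e183
```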

In that sense, the invariant nature of the speed of light seems almost specifically tailored for the purpose of linking space and time together. All of the fun relativistic side effects you end up with are logical necessities conspiring to limit the information available starting at any point in space over a given period of time. Thus: in a deterministic universe, finite time implies finite space—so long as you have relativity!

Finally, the logical corollary to these assertions, taken together, is that infinite space implies infinite time and vice versa.

Conclusion: Maybe that's why relativity.