I will try to give only the briefest overview of cellular automata here, as there are already more than enough excellent introductions to the topic on the World Wide Internet.

A cellular automaton is a model of computation in which a cyclic or unbounded array of individual *cells* each contains an integer value, often constrained to 0 or 1. Every such model is governed by a specific ruleset, and the whole is referred to in the singular as a *cellular automaton* (CA).

CAs are configured to operate in one or more spatial dimensions. While there is no upper bound on dimensionality, the majority of research focuses on 1-D and 2-D models. Some models extend arbitrarily far along each dimension as needed, although for some purposes, finite wrap-around spans of cells are used. There is always a timelike progression which drives the computation: the combination of the specific ruleset and the initial cell values at \( t_0 \) completely determines the behavior of the CA as time progresses.

Like anything else, CAs have areas of strength and weakness relative to other models of computation. One of their great strengths is an innate capacity for massively parallel calculation, which is appealing as we increasingly face hard physical limits on serial computation speeds, and which, perhaps more to the point, shares properties with Newtonian spacetime and other facets of the "real world."

But perhaps the greatest strength of CA models is their underlying simplicity. While they can be made arbitrarily complicated, they rarely are, and more importantly, don't need to be. The simplest species of CA are known as Elementary Cellular Automata (ECAs), shorthand for the 256 rulesets of the simplest class that isn't entirely trivial.

Every ECA consists of only one spatial dimension. Each cell takes only two possible values, and bases its behavior solely on its own value and those of its two immediate neighbors. The precise rule for updating that value is what distinguishes the 256 ECAs from one another. Every cell in a standard cellular automaton operates under precisely the same rules as every other cell—it is only the different values of their neighbor cells which allow variation in their behavior.

One of the simplest rules we might have would be to check the value (often called "color") of the cell to your right and then change your own value to match. On the next time step, the cell to your left will do the same thing you just did, and so that value will move a step further; indeed, every starting value will propagate at a constant rate to the left with this rule. When studying ECAs, time is typically depicted as moving down on the y-axis, yielding a 2-D snapshot of what actually takes place over every moment along that one line of cells. The example discussed here would present as a diagonal line crossing space and time in such a plot.
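To make the mechanics concrete, here is a minimal sketch in Python (function names are my own) of that copy-your-right-neighbor rule on a small cyclic row:

```python
def step_copy_right(cells):
    """One time step: each cell takes the value of its right-hand
    neighbor (wrapping around), so patterns drift one cell leftward."""
    n = len(cells)
    return [cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0, 0]
history = [row]
for _ in range(3):
    row = step_copy_right(row)
    history.append(row)
# printing `history` top to bottom gives the usual spacetime plot:
# the lone 1 traces a diagonal line, moving left one cell per step
```

Stacking the successive rows vertically reproduces exactly the diagonal-line plot described above.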

Many rules are along the lines of the straightforward propagation example we just gave, but Rule 170 fits that description most precisely. In its rule diagram, the lower middle cell denotes what the middle cell will become after one time step, based on the values of itself and its neighbors as depicted above it. For these simple CAs, there are only 8 such neighborhood cases, which together completely define the behavior of the model; this is why there are only \(2^{2^3} = 256\) possible rules at this level of simplicity.

To be precise, the top line in each icon shows the given cells \( c_{i-1}, c_{i}, c_{i+1} \) at time \( t_k \), with the cell below showing the resulting value of \( c_i \) at \( t_{k+1} \). As discussed in our example, we see that the new center cell simply copies whatever color was previously on its right side.
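The same lookup can be written out programmatically. This is a small sketch (names are my own) that unpacks a Wolfram rule number into its 8-entry table; for Rule 170, every entry confirms that the new center value equals the old right neighbor:

```python
def eca_table(rule):
    """Map each neighborhood (left, center, right) to the new center value.
    Bit p of `rule` gives the output for the neighborhood whose three
    cells, read as a binary number, equal p (the Wolfram code)."""
    return {((p >> 2) & 1, (p >> 1) & 1, p & 1): (rule >> p) & 1
            for p in range(8)}

table = eca_table(170)
# for Rule 170, each output equals the right neighbor of its input
```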

This is a fairly boring ruleset and results in a fairly boring spacetime plot—scrolling down, you can find Rule 170 near the bottom right, sporting plenty of diagonal lines.

Note that a comparable rule that merely pushes everything to the right instead of the left is essentially the same thing. In fact, due to left-right mirroring and/or complementary value-inversion, each of the 256 ECAs is equivalent to as many as 3 other rulesets, leaving only 88 truly distinct CAs. These are shown below with the equivalent rule numbers listed together (see Wolfram codes), the top, bolded number being the one featured in the plot. The plots themselves are of a 128-cell-wide cyclic configuration captured for 128 time steps.
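The 88-class count is easy to verify by brute force. Below is a sketch (function names mine) that computes each rule's mirror and complement and collapses the 256 rules into their equivalence classes:

```python
def mirror(rule):
    """Swap the roles of left and right neighbors in an ECA rule."""
    out = 0
    for p in range(8):
        l, c, r = (p >> 2) & 1, (p >> 1) & 1, p & 1
        swapped = (r << 2) | (c << 1) | l
        out |= ((rule >> swapped) & 1) << p
    return out

def complement(rule):
    """Invert 0s and 1s everywhere: the new rule maps an inverted
    neighborhood to the inverted output of the old rule."""
    out = 0
    for p in range(8):
        out |= (1 - ((rule >> (7 - p)) & 1)) << p
    return out

# represent each equivalence class by its smallest member
classes = {min(r, mirror(r), complement(r), mirror(complement(r)))
           for r in range(256)}
# len(classes) == 88
```

As a spot check, Rule 110's mirror is Rule 124 and its complement is Rule 137, matching the standard tables.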

Undoubtedly the most famous example of a CA is John Conway's Game of Life, a 2-D model. Despite extremely straightforward rules of operation, it yields surprising complexity; it has long since been proved Turing complete, meaning it is able to carry out any computation that any other Turing-complete model can, and is therefore subject to a number of undecidability and incompleteness results established by Turing, Gödel, Rice, and others over the 20th century.

For example, these results mean that it is impossible, in general, to know if one arbitrary configuration will ever lead to some other specific configuration; in fact, for infinitely many starting configurations, it is impossible to say anything of consequence about their eventual behavior, including whether they will ever stop changing or repeat themselves. But I digress. Excellent overviews of such impossibility proofs and an enormous amount of material are readily available on the Game of Life, so we'll leave that be.


*[Images: hi-res shots of Rule 54 structure flowing from a single toggled bit]*

*As usual, the main ideas here are wildly speculative, written down as food for thought. I am not claiming to be a cosmetologist or astrologer.*

For the purposes of this post, let's suppose that the universe is ultimately discrete. By this, I mean that when you get down to its most fundamental level, there are "pixels" in one way or another, some ultimate underlying property of everything which is atomic and indivisible, digital and not analog. If the uncertainty inherent in quantum mechanics drives the deepest level, then the rest of this may not apply, but it is certainly possible that there are deeper underlying forces yet to be identified, so we don't know yet.

Note that the discreteness of the universe does not preclude space or time from being infinite (although it is arguably an infinity of a lower order). However, I'm about to suggest here that it *does* link the finite-ness of space and time inextricably: either they are both infinite, or both finite.

Consider a discrete universe with a finite volume, but lasting forever. Any such universe could be reasonably treated as an enormous finite-state machine. No matter how vast it might be, there would be only so many possible configurations it could take on before repeating itself.

If its physics are deterministic—configured such that any given arrangement of "pixels" (or "atoms" or "bits") necessarily defines its successor—it would use only a tiny fraction of all possible states. Even if there is some purely random influence at that level, and it could conceivably reach every single possible state, there would still be a limit to the number of possible states.

Granted, it would be enormous; for example, assuming a universe comprising %10^100% bits, there would be %2^(10^100)% possible configurations for it to be in at any given moment. Note that this is a big fucking number; we're talking %~30000...0% possible states, where the %...% is replaced by about %3,000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000,%%000,000,000% *digits*.
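That digit count can be sanity-checked quickly: the number of digits of %2^n% is %floor(n * log10(2)) + 1%, so a one-liner (a rough float estimate, not an exact count) gives the order of magnitude:

```python
import math

# digits of 2^(10^100) ~ 10^100 * log10(2), i.e. about 3.01 * 10^99
approx_digits = 10**100 * math.log10(2)
# so the state count is roughly a 3 followed by ~3 * 10^99 more digits
```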

If we consider its progression over time, we're looking at all the permutations of those configurations, representing an upper bound on every possible narrative the universe could follow, of which there would be %(2^(10^100))!%. The number above, which we could not write out because it has more digits than the number of electrons in the observable universe, is roughly the number of *digits* of this new, incomprehensibly larger number. (Yet still finite, so overall, pretty small as numbers go.)
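Counting the digits of such a factorial is hopeless by direct computation, but Stirling's approximation (via the log-gamma function) gets there without ever forming the number; this sketch checks itself against exact factorials at small scales:

```python
import math

def factorial_digits(n):
    """Digit count of n! from log10(n!) = lgamma(n + 1) / ln(10),
    which never materializes the factorial itself."""
    return math.floor(math.lgamma(n + 1) / math.log(10)) + 1

# matches the exact count for small n, e.g. 100! has 158 digits
```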

But I digress. So let's focus on the deterministic case, which seems likely to be the correct one in a discrete universe. If we have a finite number of bits, that means that sooner or later, we're bound to come back to an exact state that already occurred in the past. That point necessarily defines a set of states through which the universe will cycle, over and over, forever.
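This pigeonhole argument is easy to demonstrate on a toy universe. Here is a sketch (the rule and sizes are arbitrary choices of mine): a deterministic 12-bit cyclic ECA world, stepped until some state repeats, which must happen within %2^12% steps:

```python
def step(state, width, rule=110):
    """One deterministic update of a cyclic row of `width` cells,
    packed into the low bits of an integer, under an ECA rule."""
    nxt = 0
    for i in range(width):
        l = (state >> ((i + 1) % width)) & 1
        c = (state >> i) & 1
        r = (state >> ((i - 1) % width)) & 1
        nxt |= ((rule >> ((l << 2) | (c << 1) | r)) & 1) << i
    return nxt

seen = {}          # state -> first time it was observed
state, t = 1, 0    # start from a single toggled bit
while state not in seen:
    seen[state] = t
    state = step(state, 12)
    t += 1
cycle_length = t - seen[state]   # length of the loop the world settles into
```

No matter the rule or the starting state, the `while` loop is guaranteed to terminate, and from the repeated state onward the history cycles forever.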

Each cycle will only take a finite amount of time. This means that time cannot truly be termed infinite, since there would be absolutely no reference point by which a state occurring in one cycle could be distinguished from the same state in the next cycle. Time would be a loop of finite length. Thus: *in a deterministic universe, finite space implies finite time*.

What is less clear to me is whether the converse follows: that finite time implies finite space. The symmetry of space and time biases me strongly towards thinking it does, but let's look at it.

In the finite-space situation above, I claim that time is effectively finite because there are a limited number of distinct states, rendering it meaningless to speak of any time beyond that necessary to differentiate each of them. In a finite-time universe containing infinite space, we might be tempted to look for the same general pattern; a finite cycle such that regardless of distance traveled (rather than time elapsed), there are a limited number of possibilities one can end up with.

Cue relativity.

Let's consider our infinite-space universe as an infinite string of bits spiraling out from a point-source observer (you). Pretend there is no speed of light limiting things in this universe. Even with a finite lifespan, the universe would still very much be spatially infinite, both in the sense that you could observe every single bit in that infinite string, and, secondarily, that you could travel an arbitrary distance within it, ending up surrounded by an arbitrarily large amount of completely different information from wherever you started. This is clearly not analogous to the finite-space situation above.

How might we fix that asymmetry between time and space if we were designing a universe and wanted to keep things simple and balanced? Mix in relativity. With the speed of light governing things, any observer is now limited to a light cone demarcating a strict upper bound on the observable universe; this is equivalent to establishing a finite limit on the number of bits, and consequently the number of possible states, perceivable within a finite interval of time.

In that sense, the invariant nature of the speed of light seems almost specifically tailored for the *purpose* of linking space and time together. All of the fun relativistic side effects you end up with are logical necessities conspiring to limit the information available starting at any point in space over a given period of time. Thus: *in a deterministic universe, finite time implies finite space—so long as you have relativity!*

Finally, the contrapositives of these assertions, taken together, give us the corollary that infinite space implies infinite time and vice versa.

**Conclusion:** Maybe that's why relativity.