On generating primes with periodic continued fractions


Pardon the mess. I wrote this with a different markup tool and it looks like it doesn't want to play nice with Ghost.

I started writing this a few days ago, and have since ironed out some things which I hope to come back and flesh out. I leave the rest of the post as-is for now, half for my Dear Reader, and half as a note to self.

TL;DR: I'm virtually positive I'm right about most of this, and have come up with several different ways to theoretically exploit continued fractions to generate primes of arbitrary size. Unfortunately, and naturally, PCFs are just as deeply nuanced as most prime-connected stuff, meaning same ol' story, different tune.

If it were possible to cheaply generate large PCFs, it would be a very different story, and I haven't completely ruled out the possibility yet. In fact, it would be sufficient merely to develop an algorithm which could spit out the length of the period of the PCF for a given radical, assuming it ran in reasonable time. What I did find in the literature about it suggests that it's an open problem, but there also wasn't very much literature compared to some of your more mainstream areas of number theory.
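For concreteness, here's a minimal sketch of the textbook expansion of %\sqrt{n}% (the standard %(m, d, a)% recurrence), written in Python. It does recover the period, but only by walking through it term by term, so it is emphatically not the cheap period-length oracle I'm wishing for above.

```python
from math import isqrt

def sqrt_cf_period(n):
    """Continued fraction of sqrt(n): returns (a0, periodic part).
    Standard recurrence: m' = d*a - m, d' = (n - m'^2) / d, a' = (a0 + m') // d'.
    The period is known to end exactly when a term equals 2*a0."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        return a0, []                # perfect square: no periodic part
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period
```

For example, `sqrt_cf_period(7)` returns `(2, [1, 1, 1, 4])`, i.e. %\sqrt{7} = [2; \overline{1, 1, 1, 4}]% with period length %4%.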

If for whatever reason this doesn't load or looks messed up, try reading here (or here!)



Factorials vs. Power Sets

Wanted to jot this down as food for thought before I forgot. And so I did.

So we have factorials, denoted with a %!% suffix, e.g. %4! = 1 \times 2 \times 3 \times 4 = 24%, or more generally \[n! := \prod_{k=1}^{n} k = 1 \cdot 2 \cdot 3 \cdots (n-1) \cdot n.\] Among many other things, %n!% represents the number of possible permutations of a set of %n% unique elements, that is, the number of different ways we can order a group of things.

We've also got %2^x%, the size of the "power set" of a set with %x% elements. If we have a set %\mathbf{S}% containing %|\mathbf{S}|% items, %2^{|\mathbf{S}|}% is the total number of distinct subsets it has, including the empty set and %\mathbf{S}% itself. The notion of a power set plays a significant role in transfinite math.
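As a quick sanity check of that count, here's a two-minute enumeration of the subsets of a three-element set (standard library only, nothing specific to anything above):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself: 2**len(s) of them."""
    items = list(s)
    return list(chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

print(len(power_set({"a", "b", "c"})))  # 8 == 2**3
```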

%\aleph_0% is the "smallest" of the infinities, representing the cardinality of any countably infinite set (e.g. the integers %\mathbb{Z}%). If we additionally assume the Continuum Hypothesis (on top of the Axiom of Choice), the next smallest cardinality is that of the power set, %2^{\aleph_0} = \aleph_1%, which is also the cardinality of the reals %\mathbb{R}%. The generalized version, %\mathsf{GCH}%, says the same thing at every level: %2^{\aleph_k} = \aleph_{k+1}%, with no cardinality in between.

And there's your background. So, I wondered whether %n!% or %2^n% grows faster; it does not take too much thought to realize it's the former. While %2^n% is merely growing by a factor of %2% with each index, %n!% grows by an ever-increasing factor. In fact, it follows that even for an arbitrarily large constant %C%, you still end up with %\lim_{n\rightarrow\infty} (n! - C^n) = \infty%. (The same holds for division: %\lim_{n\rightarrow\infty} \frac{n!}{C^n} = \infty%.)
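A minimal numeric check of that, fixing %C = 10% (the exact base doesn't matter):

```python
from math import factorial

# n! eventually overwhelms C**n for any fixed C: each new factor of n! is n itself,
# while C**n only ever picks up another factor of C.
C = 10
for n in (5, 10, 20, 40, 80):
    print(n, factorial(n) / C**n)  # starts tiny, then blows up without bound
```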

But here is where my understanding falters. We've seen that %n!%, in the limit, is infinitely larger than %2^n%; I would think it follows that it is therefore a higher cardinality. But when you look at %2^{\aleph_k}% vs. %\aleph_k!%, some obscure paper I just found (and also Wolfram Alpha) would have me believe they're one and the same, and consequently (under %\mathsf{GCH}%) both equal to %\aleph_{k+1}%.

Unfortunately, I can't articulate exactly why this bothers me. If nothing else, it seems counter-intuitive that on the transfinite scale, permutations and subsets are effectively equivalent in some sense.

...but suddenly I realize I'm being dense. One could make the same faster-growth argument for %3^n% versus %2^n% as for %n!%, and in any case, all of these sequences are blatantly in bijection with the natural numbers and therefore countable; growth rate alone says nothing about transfinite cardinality. Aha. Well, if there was anything to any of this, it was that bit about permutations vs. subsets, which seems provocative.

Well, next time, maybe I'll put forth my interpretation of %e% as a definition of %\mathbb{Z}%. Whether it is the definition, or one of infinitely many differently-shaded definitions encodable in various reals (see %\pi%), well, I'm still mulling over that one...

Empirical evidence for the Axiom of Choice


(Credit to xkcd for the comic, of course.)

Near as I can figure, the go-to framework for mathematics these days is Zermelo-Fraenkel set theory, without or with the Axiom of Choice (denoted ZF or ZFC, respectively). The Axiom of Choice (AC) was contentious in its day, but the majority of mathematicians now accept it as a valid tool. It is independent of ZF proper, so many useful results require that you explicitly adopt (or reject) it in your proof, and you're allowed to do either.

The simplified version of AC is that it lets you pick one element from each of any collection of non-empty sets, thereby forming a new set. Where it goes beyond ZF is that it allows this even for infinitely many sets with no rule specifying which element to pick, something ZF on its own cannot always justify. Imagine you have an infinite number of buckets, each one containing an infinite number of identical marbles; AC finds it acceptable for you to posit, in the course of a proof, that you somehow select one marble from each of those buckets into a new infinite pile.
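In symbols, one standard formulation is that every indexed family of non-empty sets admits a choice function: \[\bigl(\forall i \in I:\ S_i \neq \emptyset\bigr) \implies \exists f : I \to \bigcup_{i \in I} S_i \ \text{ such that } \ \forall i \in I:\ f(i) \in S_i.\]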

Unsurprisingly, this means that AC is equivalent to a number of other famous results, notably the well-ordering theorem, which counter-intuitively states that any set can be well-ordered given the right ordering relation, even e.g. %\mathbb{R}%. Worse yet, it lets you (via the Banach-Tarski paradox) disassemble a tennis ball into five pieces and reassemble them into two copies of the original, or, with more pieces, into something the size of the sun. In other words, AC is meant to be an abstract mathematical tool rather than a reflection of anything in reality.

Now, I could spend forever and a day discussing this sort of stuff, but let's move along to my thought: what it has to do with the Big Bang.

As you all know, the Big Bang ostensibly kicked off our universe some %13.8% billion years ago. However, some of the details are still kind of hazy, especially the further back you look. Past the chronological blinder of the Planck Epoch—the first %\sim 10^{-43}% seconds of our universe—there is essentially a giant question mark. All of our physics models and equations break down past that point. Theory says that the Big Bang, at its moment of inception, was an infinitely dense zero-dimensional point, the mother of all singularities. One moment it's chillin', the next it explodes, and over time becomes our universe.

I don't love this theory, but it's the best fit we've got so far for what we've observed, particularly with red-shifting galaxies and the cosmic microwave background radiation, so we'll run with it for the moment. So where did all of these planets and stars actually come from, originally? There is a long and detailed chronology getting from there to here, but let's concern ourselves with looking at it from an information-theoretic point of view, or rather, 'How did all of this order and structure come out of an infinitesimal point?'

It's unclear, but the leading (though disputed) explanation is inflation, referring to an extremely rapid and sizable burst of expansion in the universe's first moments. There is apparently observational data corroborating this phenomenon, but a lot of the explanation sounds hand-wavy to me, and as though it were made to fit the evidence. The actual large structure of the universe is supposed to have arisen out of scale-invariant quantum fluctuations during this inflation phase, which is a cute notion.

Note, by the way, that entropy was also rapidly increasing during this step. In fact, my gut feeling on the matter is that since entropy is expected to strictly increase until the end of time (maximum entropy), it makes plenty of sense that the Big Bang kernel would have had zero entropy—hell, it's already breaking all the rules. While thermodynamic and information-theoretic entropy are ostensibly two different properties, they're one and the same principle underneath. Unless I'm gravely mistaken, no entropy would translate to complete order, or put another way, absolute uniformity.

If that was indeed the case, its information content may have been nothing more than one infinitely-charged bit (or bits, if you like); and if that were the case, there must have been something between that first nascent moment and the subsequent arrival of complex structure that tore that perfect node of null information asunder. Whether it was indeed quantum fluctuations or some other phenomenon, this is an important point which we will shortly circle back to.

It's far from settled, but a lot of folks in the know believe the universe to be spatially infinite. Our observable universe is currently limited to something just shy of %100% billion light years across; however, necessary but not sufficient for the Big Bang theory is the cosmological principle, which states that we should expect the universe to be overwhelmingly isotropic and homogeneous, particularly on a large scale (%300%+ mil. light years). This is apparently the case, with statistical variance shrinking down to a couple percent or much less, depending on what you're examining.

That last bit is icing on the cake for us. The real victory took place during whatever that moment was where a uniform singularity became perturbed. (Worth noting that if the uniformity thing is true, that mandates that it really was an outside agency that affected it in order to spawn today's superstructure, which of course makes about as little sense as anything else, what with there presumably being no 'outside' from which to act.)

So here's the punchline. If you assume an infinite universe, that means that the energy at one time trapped featureless in that dimensionless point has since been split apart into an infinitude of pieces. "But it's still one universe!" you say. Yes, but I think it's equally valid to recognize that our observable universe is finite, and as such, could be represented by %\mathbb{N}% (if discrete), or %\mathbb{R}% (if not), or some %\aleph_n% if things are crazier than we know. Regardless, it could be described mathematically, as could any other of the infinitely many light cones which probably exist, each cozy in its own observable (but creepily similar) universe.

Likewise, we could view each observable universe as a subset of the original Big Bang kernel, since that is from whence everything came. The kernel must then be representable as a set whose cardinality is at least that of the power set of all the observable-universe pockets, and therefore the act of splitting it into these subsets was a physical demonstration of the Axiom of Choice in reality!

I'm not sure what exactly that implies, but I find it awfully spooky. I feel like it either means that some things thought impossible are possible, or that this violation happened when the Big Bang kernel was still in its pre-Planck state; but if that's the case, not only do our physical models break down, our fundamental math would be shown not to apply in that realm either, which means it could have been an anti-logical ineffable maelstrom of quasi-reality which we have no hope of ever reasoning about in a meaningful way.

Dark integers

(or, "Yet another way to learn more about the underlying structure of the universe.")

Let us assume.

I was reading my article on the distribution of computation space and going back over my results, but this time around, a new potential implication took root in my imagination.

Let's assume for the moment that Turing-completeness is necessary and sufficient for computation of any general sort, and furthermore, that there are no such mechanisms in existence that can compute on a fundamentally higher level. In other words, let us assume that every computing device or mechanism is isomorphic to any other such device or mechanism.

Let's also assume that my experimental analysis of computation space was more or less correct, at least in the broad strokes. If someone else conducted a similar experiment using a substantially different instruction set or methodology, some of the specifics might turn out differently, but I suspect the more fundamental properties I found would still show up.

These aren't trivial assumptions, but they're at the very least plausible. So we proceed.

Scarcity.

Near the end of my article, I speculated that primes would probably be under-represented in a list of numbers generated by random code and sorted by frequency. This was partly an observation, but mostly a logical argument; by their nature, primes eschew association with simple patterns, and simple patterns are the best you're going to get out of a random program generator. I would not expect primes to start showing up in bulk until we reached the level of complexity where a prime sieve might reasonably form, and that's way past the scope of what we were dealing with so far. (Side note: it would be interesting to figure out the shortest possible prime generating program using that sort of basic instruction set.)
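My actual generator and instruction set live in that other article rather than here, so take the following as a toy stand-in that only conveys the shape of the experiment: build random micro-programs out of a handful of primitive operations, run them, and tally which integers fall out. Every detail below (the op set, the program length, the two-register setup) is invented for illustration, not a reconstruction of the original.

```python
import random
from collections import Counter

OPS = ("inc", "dec", "dbl", "add", "swapmul")  # hypothetical primitive operations

def run_random_program(rng, length=8):
    """Execute one random micro-program over two registers and return |result|."""
    acc, reg = 1, 1
    for _ in range(length):
        op = rng.choice(OPS)
        if op == "inc":
            acc += 1
        elif op == "dec":
            acc -= 1
        elif op == "dbl":
            acc *= 2
        elif op == "add":
            acc += reg
        else:  # "swapmul": multiply, remembering the old accumulator
            acc, reg = acc * reg, acc
    return abs(acc)

def frequency_table(n_programs=1_000_000, seed=0):
    """Tally how often each integer shows up across many random programs."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_programs):
        counts[run_random_program(rng)] += 1
    return counts
```

Whether this toy version reproduces the same spikes and crevices depends entirely on the invented op set; the point is just the shape of the tally.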

What the article doesn't include is my follow-up analysis on the subject, which indeed showed that primes seem to be relatively less frequent than other integers in their neighborhood. Here's a plot relating the set of integers (absolute values were taken) to the number of times each one appeared in the cumulative output of a billion random micro-programs.

(Plot: integer frequency through %N = 600%.)

There is a lot in this plot that bears discussion, but let's stick to the highlights:

  • It is roughly logarithmic.
  • There are conspicuous spikes (and patterns of spikes), most notably around each %2^n%.
  • The red dots—you may have to click on it and zoom in to see them—are primes.
Conspicuous spikes.

If you haven't figured it out yourself, the spikes on the powers of two are a consequence of the binary assembly code in which these micro-programs are written. It's operationally trivial for computers to multiply or divide by two, so you'll see it a lot in a random distribution like this. And getting to %64% is not much more work than getting to %32% (after all, the infrastructure is already in place.) The same goes for %128%, %256%, and so on, which is why you see these spikes and why they're barely even tapering off.

Red dots.

Closer examination of where these dots fall shows that they're almost always in a crevice, or at least on a slope, which is to say that the number in question has a notably smaller chance of being generated than one or both of its neighbors.

However, primes are way too slippery a customer to allow this to always happen. Take, for example, %n=31%. This is the first prime that's sitting up on a mesa. I argued before that primes are hard to generate, so what gives?

Well, for many purposes, the distribution of primes is effectively random. Because of this, some of them are doomed to land right next to a really popular neighbor, as with our poor %31%. As discussed, %32% is especially easy to generate, and as it happens, adding or subtracting %1% is also generally a piece of cake. It's just bad luck for %31% that his neighbor attracts all this traffic for him.

Degrees of introversion.

The example of %31% is particularly egregious, since those %2^n% spikes are the biggest bullies on the playground. But all primes are affected by this phenomenon to some extent. Since a prime by its nature can't be directly conjured via a loop (read: multiplication, exponentiation, hyper-operations), the only way to get to these numbers (short of e.g. prime sieves) is to land on some composite number close to it, and then happen to add or subtract the right constant. The frequency of any prime is ultimately a reflection of its neighborhood.

There are lots of consequences to this arrangement. Turning it around, one good example is the case of twin primes, which are primes that differ only by %2%. Apart from the pair %(3, 5)%, you only get twin primes when the number between them is divisible by %6%: that middle number is even, and it must also be the multiple of %3% among the three consecutive integers, since neither prime can be. Any multiple of %6%, being divisible by %2%, %3%, and %6%, is right up there as an easy target for random generation. The upshot here is that when you have a twin prime pair, you will almost always see a single-value spike between them, since they're surrounding such a juicy target.

But enough fun with plots.

Dark integers.

This is a term I pulled out of my ass which I like so much that I refuse to Google it and find out it's already taken.

We've shown how certain integers are "popular" in terms of computational space frequency, like the powers of two; we have also shown that some tend in the other direction, such as the primes. A dark integer, according to me, is loosely defined as an integer that's especially hard to generate through an algorithm of any kind. I am hampered by the fact that I don't have a working theory as to how to strictly identify these numbers other than by brute force, but I have some idea of their properties.

They are not necessarily prime—in fact, they seemingly prefer to be complicated composites with many disparate factors, often including one or more large primes as a factor. It does make some sense that this would be the ideal setup for a dark integer. The erratic, informationally dense set of factors it's built from makes it an unpopular number already, and not being prime allows it to be bordered by, or at least near, one or more primes, which will not attract any spillover traffic.

'Darkness' in this sense is a relative term, at least for the moment. Perhaps it will make sense to define a dark number as any integer %N% with strictly larger Kolmogorov complexity than every %n < N%, although Kolmogorov complexity is uncomputable in general, so that definition is hard to work with directly. At any rate, some numbers are darker than others, and while that should roughly correspond to which numbers sit lowest on a plot like the one above, we have to remember that this is an experimentally-derived data set and prone to noise, especially as frequency decreases.
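As a crude brute-force stand-in for "identify the dark ones", given any frequency table like the one behind the plot (or from the toy generator sketched earlier), you could simply rank integers by how rarely they showed up. `dark_candidates` is a hypothetical helper, not part of my original analysis:

```python
def dark_candidates(counts, limit, k=10):
    """Return the k integers below `limit` with the lowest observed frequency.
    `counts` maps integer -> number of appearances; integers that never appeared
    at all (like 3465 in my run) count as zero and float straight to the top."""
    return sorted(range(limit), key=lambda n: counts.get(n, 0))[:k]
```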

That said, I would like to tip my hat to %3465%, which was the lowest number that did not appear a single time in my billion programs. Wolfram Alpha has this to say about %3465%:

  • %3465 = 3^2 \times 5 \times 7 \times 11%.
  • %3465% has the representation %3465 = 2^7 \cdot 3^3 + 9%.
  • %3465% divides %34^6-1%.

It's too soon to say whether any of that is significant.

Time to get crazy.

So, do you remember what we're assuming about computers? Basically, that a computer is a computer by any other name? It comes into play now.

  • Assumption: The universe is not magic. It plays by the same Turing-equivalent rules as everything in it. This means physics is ultimately deterministic, and if we had infinite time and infinite memory, we could simulate our universe on a Turing machine.
  • Assumption: The Big Bang happened. The initial configuration of energy at its inception was effectively random.

The universe seemingly operates by following physical laws, with all matter and energy having been kicked off by that hot mess of a Big Bang. This strikes me as loosely analogous to my Skynet micro-program generator, what with random initial conditions feeding clearly defined, step-by-step processes. More to the point, our earlier assumption posits that they are mathematically interchangeable for our purposes.

Hypothesis: For any given countable collection of anything in the universe, the resulting count is relatively unlikely to be a dark integer, all else being equal.

To test the hypothesis:

  1. Identify dark integers. The more, the larger, the merrier.
  2. Identify anything that can be quantified.
  3. Quantify the anything.
  4. Repeat to within error bounds for confirmation or disproof.

There may, unfortunately, be some bounds on what the anything could actually be. Ideally, it would be as simple as catching a bunch of atoms of something over and over and tracking the result, but more likely it would have to be something more directly related to the Big Bang, such as star counts in galaxies.
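To make steps 1 through 4 concrete, here is a minimal sketch of the comparison itself, assuming you already have a set of dark-integer candidates and a pile of observed quantities (star counts per galaxy, say). Both inputs and the function itself are hypothetical; nothing here has been run against real data:

```python
def dark_deficit(observed_quantities, dark_set, limit):
    """Compare how often observed counts land on 'dark' integers versus chance.
    Returns observed_fraction / baseline_fraction: a ratio well below 1 would
    (very loosely) support the hypothesis, a ratio near 1 would not."""
    hits = [q for q in observed_quantities if 0 <= q < limit]
    if not hits:
        return None
    observed_frac = sum(q in dark_set for q in hits) / len(hits)
    baseline_frac = sum(n in dark_set for n in range(limit)) / limit
    return observed_frac / baseline_frac if baseline_frac else None
```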

If all of this were sound, conclusions could be drawn.

If dark numbers do turn out to be under-represented in physical counts, it may be further evidence of the truly random and arbitrary nature of creation, and more importantly, it may be strong evidence of a deterministic universe and all that that implies.

If there is no discrepancy when it comes to dark numbers, it may imply that there is a true stochastic or otherwise bizarre layer to physics, or it may imply that the Big Bang was ordered more deliberately than one would think, or that there is some agency at work that is disrupting the "natural" order of things in one way or another.

Either way, we learn.


Addendum

The dark integers on the graph above are specific to the instruction set used to generate it. While there is reason to expect that most sets of operations will be similar inasmuch as a binary system seems to be the simplest approach, it is not a guarantee. One can imagine pathological instruction sets that will yield an entirely different contour, such as one that uses the primes as its base units.

Long story short, if the binary analysis doesn't turn up results, it might be worth investigating the possibility of reverse engineering an instruction set by identifying its specific dark integers and trying to tease out what sort of atomic operations would give rise to those holes.

While applying any of this to cosmology is wildly speculative, biology and the mechanism of evolution seem like a perfect candidate for testing the viability of dark integers in general. Evolution is undeniably a finite computational force that lends itself well to the whole idea. Multi-cellular organisms are literally built from code, and it stands to reason that on balance, evolutionary pressures will tend to restrict most quantifiable biological features to easier-to-compute amounts. More on this as I investigate.

Rambling brain-dump

  • What is the resolution to the impossibility of heat death, since that implies
    • 0 entropy? (probably just a confusion of terms)
    • either
      • physics is not time-reversible, or
      • something would come from nothing when running from heat death in reverse.
  • One writer (Phil Torres) referenced in Wikipedia made the claim that if we start running simulations of our own (people-type simulations), then odds are we're at the bottom of a very large stack of simulations on simulations by entities higher up the chain. He argues that this poses an existential risk, since if any one of them above us gets shut down, we go with it. I suspect that if that arrangement were the case, there would probably be an infinite number of simulations above us, in which case a shutdown somewhere up the chain would be all but inevitable; since we haven't been shut down, we are probably not in a simulation that is capable of external termination.
    • An unpleasant counter-possibility to this is that many (most?) of identical simulations in parallel are indeed shut down due to whatever external agency, but the anthropic principle dictates that we cannot be aware of that.
  • I take as my only axiom in cosmology that something cannot come from nothing. Therefore, there was "never" nothing. Rigel argues this is meaningless; although we agreed that there either could have been something (as is the case) or nothing, discussing the nature of "nothing" is at best meaningless, at worst impossible. Also, I argue that since there is something, and given that something can neither arise from nor decay to nothing, there has "always" been something and will "always" be something.
  • Let us assume a quantized universe.
    • Given my arguments on by.tc, a finite bound on space implies finite time, since there are at most %2^n% informationally unique and discernible states that a bounded quantized space of %n% bits can represent. What makes a "different" loop different in any way if we're lacking any sort of external referent?
      • This fits well with the equivalency of space and time.
      • For the same reason, infinite space implies (the potential for) infinite time--I believe this holds even if there is only a finite amount of matter/energy/information occupying it.
      • The Big Bang could not be the beginning of time (especially true if time and space are as interchangeable as Rigel suggests.). If it were, then even if it were spatially infinite, there would be a fi… something!
        • Distracted by the thought that this implies that the age of the observable universe is only the age in our light cone, and that in all likelihood, it is a continuous event that has been ongoing always and will continue forever, and not at all uniform over its presumed infinite space.
  • The relatively small number of rules that govern the operation of the universe:
    • I believe the probability of our simulation by a sentient species or equivalent is negligible due to, among other things, the argument above pointing out that we're still around.
    • But furthermore, a non-biased computationally-distributed multiverse would favor simplicity. See research on by.tc into extremely low rate of complexity arising in random data.
    • The Anthropic Principle is true inasmuch as we are here, thus we must be in a universe that supports 'intelligence'. Paired with the above, a computational universe approach would suggest that we occupy virtually as simple a universe as possible that is still compatible with the formation of beings of our observational reference class.
    • This also explains the finely-tuned cosmological constants. Bottom line: take the information content inherent in those constants combined with a plausible estimate for equations or algorithms governing our physical law, and I suspect you will find something on the order of, I don't know, a kilobyte, give or take a few orders of magnitude. This fits perfectly with a computation-derived multiverse.
  • Why I think the bottom layer of reality is going to be essentially identical to a bit, even if it is a particle of some kind:
    • Finite numbers are unnatural. The universe sticks to 1, 0 and %\infty%, and of those three, %\infty% is essentially just a derived identity. The probability of anything in the universe having a quantity other than one of those is vanishingly small.