
Fermi Paradox: Communicating with ET

Posted 7 years ago

Stephen Wolfram's latest blog post is about communicating with extraterrestrials using a beacon or time capsule, and is worth checking out, especially for those interested in language, AI, and related topics. Basically, one of the biggest non-obvious obstacles is the Principle of Computational Equivalence, and the obvious problem with many proposed ideas is that cultural conventions are what give meaning to symbols and drawings.

I had been thinking about how Wolfram's A New Kind of Science gives insights into the Fermi paradox, which I think is related to his recent post. Roughly speaking, the Fermi paradox is this: if one assumes any reasonable positive probability for each requirement of an alien civilization (whatever that means), e.g. a star with planets, habitable planets, life developing, civilization arising, technology developing, and probes or messages being sent out either on purpose or by accident, then aliens should already be visiting us. The messages or travelers could travel slowly, since the universe is so old, and the probability of each required step can be very small, because there are so many stars in the galaxy (even ignoring other galaxies).

There are many takes on this paradox (including the proof-by-contradiction claim that the probability of civilization arising is exactly zero). There is a note in the book discussing the paradox, but I was thinking there could be another NKS-centric explanation (probably similar to things that have been said before, but based on a computational way of thinking).

Consider the space of animal intelligences. We don't even have an NKS model for this, but let's say the space contains N rules which are universal. That would mean there are N rules that can effectively model all animal intelligences, capturing the main features while allowing some fine details to differ. If two species use the same rule, they can understand each other. If a species has a universal rule, it is theoretically possible for it to understand the others, but in practice this is usually so hard that any other universal intelligence is practically unintelligible.

So one answer to the Fermi paradox is that we are surrounded by alien civilizations that are bizarre and unidentifiable as intelligent (and vice versa). The number N of universal rules in the space of naturally arising animal intelligences might be the most important factor, since one might expect from random placement in 3D that between one and ten neighbors would be enough to isolate us from any civilization we could communicate with.

For example, one civilization might be running rule 54 and another rule 110, and neither would have any practical way of knowing whether the other civilization's artifacts are anything more than natural.

Row[ArrayPlot[CellularAutomaton[#, RandomInteger[1, 300], 800], ImageSize -> 200] & /@ {54, 110}]

[Image: two different elementary cellular automata]
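For readers without Mathematica at hand, here is a minimal Python sketch of the same idea: an elementary cellular automaton (ECA) evolved from a random initial row, analogous to the `CellularAutomaton` call above. The function names are my own; this is an illustrative translation, not Wolfram's implementation.

```python
import random

def eca_step(row, rule):
    """Apply one step of an elementary CA rule (0-255) with periodic boundaries."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] << 2 | row[i] << 1 | row[(i + 1) % n])) & 1
            for i in range(n)]

def eca_run(rule, row, steps):
    """Return all rows produced by `steps` applications of the rule."""
    history = [row]
    for _ in range(steps):
        row = eca_step(row, rule)
        history.append(row)
    return history

# Evolve rules 54 and 110 from the same random initial condition,
# as in the Wolfram Language one-liner above.
init = [random.randint(0, 1) for _ in range(300)]
patterns = {rule: eca_run(rule, init, 800) for rule in (54, 110)}
```

Rendering `patterns[54]` and `patterns[110]` as bitmaps reproduces the two contrasting textures in the image: nested regularity for 54, irregular localized structures for 110.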

POSTED BY: Todd Rowland
2 Replies

I hadn't thought of it that way. One universal computer can emulate another, and you could think of the emulation as an encryption, even if it is not an intentional one. There might be some confusion, though, between the decoding humans would need for an alien culture (say, a lost tribe, as in science fiction) and the kind of decoding needed for aliens whose intelligence and language are based on a different computational rule (which is much harder).
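The "unintentional encryption" can be made concrete even without full emulation. A toy example I find helpful (my own illustration, not from the thread): recode every cell of a rule 110 evolution through a 0/1 swap, and the recoded history is exactly an evolution of a different ECA, rule 137, obtained by conjugating the rule table. An observer seeing the rule 137 run has no obvious sign that it is "really" rule 110 in disguise.

```python
def eca_step(row, rule):
    """One step of an elementary CA rule (0-255) with periodic boundaries."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] << 2 | row[i] << 1 | row[(i + 1) % n])) & 1
            for i in range(n)]

def conjugate_rule(rule):
    """The rule seen through a cell-wise 0/1 swap:
    new(a, b, c) = NOT old(NOT a, NOT b, NOT c)."""
    return sum((1 - ((rule >> (7 - p)) & 1)) << p for p in range(8))

# Rule 110 and its conjugate evolve complementary configurations in lockstep.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
y = [1 - b for b in x]                 # the "encrypted" configuration
for _ in range(20):
    x = eca_step(x, 110)
    y = eca_step(y, conjugate_rule(110))
assert y == [1 - b for b in x]         # still complements after 20 steps
```

The cipher here is trivial (a bit flip), but the same point holds for any fixed recoding: the computation is unchanged, only the representation differs.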

Your second point is also a good one: if the NKS hypothesis, that simple rules are what prevail in nature, is correct, then we would expect the number of languages (or, as Andrew says, "computers" in his second paragraph) to be small. This holds out some hope that we could find others using the same type of language and intelligence, because the number of simple rules is limited. But theoretically the supply is endless, so if languages and intelligences don't follow the NKS hypothesis, it would be extremely unlikely that we would even recognize an alien intelligence, let alone communicate with it.

About the claim that computation tends to stick at the lowest level of universality: that seems to be the case for ECA rule 110, but I don't think it is generally true, because some rules drift toward randomness and others toward static conditions when left on their own, and in general the condition of universality is independent of thermodynamic principles.
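The contrast between rules that drift toward randomness and rules that freeze can be seen directly in small simulations. As a quick sketch (my choice of example rules, not from the post): from a single black cell, rule 254 fills in to a uniform static state, while rule 30 keeps producing an irregular, random-looking row.

```python
def eca_step(row, rule):
    """One step of an elementary CA rule (0-255) with periodic boundaries."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] << 2 | row[i] << 1 | row[(i + 1) % n])) & 1
            for i in range(n)]

# Start each rule from a single black cell in the middle of the row.
n, steps = 201, 100
seed = [0] * n
seed[n // 2] = 1

row30, row254 = seed, seed
for _ in range(steps):
    row30 = eca_step(row30, 30)     # class 3: random-looking growth
    row254 = eca_step(row254, 254)  # class 1: freezes to all black
```

After 100 steps, `row254` is entirely black (the pattern grows one cell per side per step until it covers the row and then stays fixed), whereas `row30` remains a mix of black and white cells with no settled structure, despite neither rule being driven by anything like thermodynamics.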

POSTED BY: Todd Rowland
Posted 7 years ago

I think this brings up the idea of a configuration space being encrypted in some way. Since interactions are all that matter, do conscious beings observe any encryption? Well, if something can't be interacted with, then it doesn't exist, at least in the observable sense.

But just talking about computers, that leaves us with endless possible clone machines that all compute the same thing, yet have their configuration spaces sitting in different encrypted configurations at all times.

But more realistically, computation tends to "stick" at the lowest level of universality. In our universe, at energy levels low enough not to blow biology to smithereens, this tends to be chemical reactions. That is probably why our brains are nowhere near the efficiency limit set by the Bekenstein bound, but are rather stable in the face of our environment.

If you create a simulation where computation happens most easily in terms of the bottom-level reactions, then very efficient computers will most likely sprout. Consider collision-based computing or soliton-like automata. The duality of energy with mass is effectively mimicked by a duality of stationary particles with moving particles. If you define some basic "bosons" that always move at one cell per step, and come up with the right interactions to fuse them, you'll have the kind of simulation needed to see computers form, and hopefully even life.
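A minimal version of that setup can be sketched in a few lines. This is my own toy model, not a known soliton automaton: particles live on a 1-D lattice with velocity -1, 0, or +1, movers advance one cell per tick, and when a left-mover and a right-mover land on the same cell they fuse into a single stationary particle, crudely standing in for the moving/stationary (energy/mass) duality described above.

```python
from collections import defaultdict

def step(particles):
    """One tick of a toy collision-based system.
    `particles` is a list of (position, velocity) pairs, velocity in {-1, 0, +1}.
    Opposite-moving particles that land on the same cell fuse into one
    stationary particle (assumes at most one such collision per cell)."""
    cells = defaultdict(list)
    for pos, vel in particles:
        cells[pos + vel].append(vel)
    out = []
    for pos, vels in sorted(cells.items()):
        if +1 in vels and -1 in vels:
            out.append((pos, 0))  # fusion: two movers become one stationary particle
        else:
            out.extend((pos, v) for v in vels)
    return out

# Two "bosons" fired at each other from an even separation collide and fuse.
state = [(0, +1), (4, -1)]
for _ in range(2):
    state = step(state)
# state is now [(2, 0)]: a single stationary particle at the collision site.
```

Richer interaction rules (fusion products that re-emit movers, multi-particle collisions) are what would be needed before anything computer-like could form, but even this skeleton shows the stationary/moving bookkeeping the paragraph describes.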

POSTED BY: Andrew Fuchs
