Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Fundamental Physics Project, sorted by activity

Entropy in the Wolfram model
https://community.wolfram.com/groups/-/m/t/2006331
Is there a way to think of entropy in the spatial hypergraph? Or does it correspond to the branching of paths in the multiway graph? I expect there to be a relation between the passage of time and the rise of entropy in the model.
Once we have this, I wonder whether there would then be a way to represent life in the model, if life can be defined as the thing in the universe that feeds off negative entropy, or free energy, as Schrödinger claimed.

Brady Doyle, 2020-06-17T18:03:51Z

What does the Wolfram Model say about the heat death of the universe?
https://community.wolfram.com/groups/-/m/t/2008340
I'm asking this question to spur a discussion that might lead somewhere interesting.

David Barksdale, 2020-06-19T20:32:25Z

A Chronological Approach to the Wolfram Physics Project
https://community.wolfram.com/groups/-/m/t/2054240
The goal of the Wolfram Physics Project is to find a rule that generates our Universe, as envisioned in the section [Ultimate Models for the Universe][1] of the chapter *Fundamental Physics* in S. Wolfram's book *A New Kind of Science*. The main obstacle is that the evolution of our Universe may be [computationally irreducible][2]. If so, the emergent properties generated by candidate rules for the Wolfram Model of our Universe can only be obtained through extremely slow simulations.
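As an aside, a minimal illustration of what computational irreducibility means in practice (my own sketch, not from the original post): Rule 30, the elementary cellular automaton used in *A New Kind of Science* as a canonical irreducible system. No shortcut formula for its center column is known, so obtaining the state at step n appears to require simulating all n steps.

```python
def rule30_step(cells):
    """One update of rule 30: new cell = left XOR (center OR right).
    Cells are 0/1 values on a cyclic row of fixed width."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def center_column(steps, width=201):
    """Record the center cell over `steps` updates, starting from a
    single 1.  There is no known way to jump ahead: each value is
    obtained only by running every intermediate update."""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column
```

The analogy to the Wolfram Model is only illustrative: hypergraph rewriting is a different substrate, but the same obstacle (no faster route than step-by-step evolution) is what motivates the time-reversed strategy below.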
One way to overcome the problem of computational irreducibility is to construct the Wolfram Model of our Universe working backward in time, treating the forms of matter from earlier epochs of the Universe as black boxes. In the [chronology][3] of our Universe, we have the following epochs (in chronological order):
- Planck epoch
- Grand unification epoch
- Inflationary epoch
- Electroweak epoch
- Quark epoch
- Hadron epoch
- Neutrino decoupling
- Lepton epoch
- Big Bang nucleosynthesis
- Photon epoch
- Recombination
- Dark Ages
- Star and galaxy formation and evolution
- Reionization
- Present time
One way to make progress in the Wolfram Physics Project is to develop a Wolfram Model explaining the transition from each cosmological epoch to the next. For example, we could imagine papers with titles like "A Wolfram Model for the transition from the Quark epoch to the Hadron epoch". Finally, after recreating the history of our Universe through Wolfram Models of the transitions between consecutive epochs, the remaining challenge would be to write a paper titled "A Wolfram Model for the time before the Planck epoch".
[1]: https://www.wolframscience.com/nks/p465--ultimate-models-for-the-universe/
[2]: https://www.wolframscience.com/nks/p737--computational-irreducibility/
[3]: https://en.wikipedia.org/wiki/Chronology_of_the_universe#Tabular_summary

José Manuel Rodríguez Caballero, 2020-08-08T15:27:18Z

A formula for the number of updates per second that occur in our Universe
https://community.wolfram.com/groups/-/m/t/2052706
Assume that our Universe is described by a Wolfram Model. We want to estimate the number of updates per second. In order to do that, we propose the following experiment.
Assume that we can observe an aperiodic physical system involving elementary particles (in a lab or via astronomical methods) whose descriptive complexity we can *estimate* at any given time (a periodic system with a *sufficiently large* period can also be used for this experiment). Suppose that during a time interval of dt seconds the system increases its descriptive complexity by dK bits (in general, the exact value is uncomputable, but we only need an estimate). Then, using the theorem from my previous post [logarithm of time = complexity][1], we can express the number of updates per second as the quotient
updates per second = 2^(dK) / dt.
Notice that, in a time interval dt, our aperiodic system increases its descriptive complexity by the same amount as the whole Universe. This claim follows from the fact that, in both cases, the only information needed to reconstruct the state at a time dt after the initial conditions is a description of the number of elapsed updates, i.e., of the product of dt and the number of updates per second. Therefore, the formula above also provides an estimation of the number of updates per second of the Universe.
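The estimation procedure above can be sketched in code. This is my own illustration, not from the post: since descriptive (Kolmogorov) complexity is uncomputable, I substitute compressed length as a crude, standard proxy giving an upper bound, and then apply the post's formula updates per second = 2^dK / dt. The function names and the compression proxy are assumptions for illustration only.

```python
import zlib

def complexity_bits(snapshot: bytes) -> int:
    """Crude proxy for descriptive complexity: length in bits of the
    zlib-compressed snapshot.  Kolmogorov complexity is uncomputable;
    a compressor only gives an upper bound, as the post's hedging
    about estimation error anticipates."""
    return 8 * len(zlib.compress(snapshot, 9))

def updates_per_second(snapshot_t0: bytes, snapshot_t1: bytes, dt: float) -> float:
    """Apply the post's formula 2^dK / dt, with dK estimated as the
    growth in compressed description length between two observations
    of the system taken dt seconds apart."""
    dK = complexity_bits(snapshot_t1) - complexity_bits(snapshot_t0)
    return 2.0 ** dK / dt
```

Note the exponential sensitivity: an error of even a few bits in dK changes the estimate by orders of magnitude, which makes the unknown error bound mentioned below the central practical difficulty.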
Before ending this post, I would like to point out that the method described here does not give an exact calculation, even if the premises hold. It is an *estimation*, and it is not clear what bound can be placed on the measurement error of the proposed approach.
A subject related to the present discussion, though not exactly the same, is the physical limit of computation. In this direction, I recommend Seth Lloyd's paper [Ultimate physical limits to computation][2].
[1]: https://community.wolfram.com/groups/-/m/t/2047389?p_p_auth=ymzGZe8D
[2]: https://cds.cern.ch/record/396654/files/9908043.pdf

José Manuel Rodríguez Caballero, 2020-08-06T13:26:36Z