Please excuse me if this post is entirely too naive. I have a background in physics and computation, but don't feel nearly qualified enough to critique anything being said here. I just want to raise a very basic question, hope someone has some insight...
I'm totally fascinated by the potential for computational properties like causal invariance to map onto problems that have confounded the foundations of physics since the discovery of QM. That potential alone is super exciting - and all the other very abstract analogues between physics and elements of these hypergraphs are very interesting.
Setting these abstract considerations aside - I'm very interested in how a universe ultimately ends up having "stuff" in it. Stephen has described particles as persistent structures, and energy as a flux of causal edges through spacelike hypersurfaces, so far so good.
Given the huge discrepancy in the actual universe between the energy of empty space and the energy of any particle with actual mass - must a rule exhibit "extreme globularity" to be a viable candidate? I don't know how else high-energy particles would be differentiated from the space between them if this weren't the eventual structure.
If this reasoning is sound, then the obvious next question would be: do all the ideas around the potential of hypergraphs to explain theoretical physics still hold when we are talking about "mega-structures" in graphs, rather than individual nodes?
The notebook below shows a rule I happened upon that generates dense isomorphic neighborhoods connected by a relatively sparse set of edges (presumably "space")... I've rasterized the images to keep the file size down.
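For what it's worth, here's a minimal Python sketch of the kind of structure I mean. This is a hypothetical toy graph, not the notebook's actual rule: two dense clusters ("particles") joined by a sparse chain ("space"), plus a crude edge-density measure that separates the two regimes.

```python
# Hypothetical toy graph (NOT the notebook's rule): two dense clusters
# joined by a sparse chain, and a density measure that tells them apart.
from itertools import combinations

def clique(nodes):
    """All edges of a complete graph on the given nodes."""
    return {frozenset(e) for e in combinations(nodes, 2)}

# Two 6-node cliques standing in for dense, particle-like neighborhoods.
a = list(range(6))
b = list(range(10, 16))
edges = clique(a) | clique(b)

# A sparse 5-edge chain standing in for the "space" between them.
chain = [5, 20, 21, 22, 23, 10]
edges |= {frozenset(p) for p in zip(chain, chain[1:])}

def density(nodes, edges):
    """Fraction of possible edges actually present among the given nodes."""
    nodes = set(nodes)
    internal = sum(1 for e in edges if e <= nodes)
    possible = len(nodes) * (len(nodes) - 1) // 2
    return internal / possible

print(density(a, edges))      # -> 1.0 (maximally dense "particle" region)
print(density(chain, edges))  # -> ~0.33 (sparse "space" between them)
```

Obviously the real question is whether an evolution rule produces this kind of separation on its own, rather than having it built in by hand as above.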
Is there some other way to imagine how particles and space would present and differentiate themselves in a Wolfram graph?