Please take a look at the project at https://community.wolfram.com/groups/-/m/t/2029731 first. It looks for singularities in WMs, especially ones that persist for at least 20 steps. Some very useful functions are defined there that detect the presence of singularities, filter WMs on that criterion, and also examine the dimensionality of the system. Their conclusion reflects disappointment at not finding the change in dimensions that, they assume, a Schwarzschild-type BH would produce.
Before I dive into the physics, let me add that I ran their 21 surviving BH models for a greater number of steps, as summarized in the attached picture (I aimed for 50 steps, but some proved too computationally intensive, while others I was able to run for hundreds of iterations).
Four more models lost their singularities (bringing the total down to 17). Here are the questions we could still answer by looking further into this:
• Which of the remaining models' BHs survive after 100, 200, 300, etc. steps?
• Can models reacquire singularities after losing them?
• If so, we need to map the durations of BH lifetimes and their frequency of occurrence.
• Write a new function that can identify the number of singularities in a given system, as well as whether any of them are nested (a BH inside another BH).
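On that last point, here is a minimal Wolfram Language sketch of what such a function might look like. It assumes (my assumption, not the linked project's code) that a singularity shows up as an abnormally high-degree vertex in the causal graph, and that "nested" can be approximated by one singularity lying in the causal future of another; the degree threshold is arbitrary:

```wolfram
(* Sketch only: a "singularity" is taken to be a causal-graph vertex
   whose degree exceeds an arbitrary threshold *)
countSingularities[rule_, init_, steps_, threshold_ : 50] :=
 Module[{cg, sing},
  cg = ResourceFunction["WolframModel"][rule, init, steps, "CausalGraph"];
  sing = Select[VertexList[cg], VertexDegree[cg, #] >= threshold &];
  <|"Count" -> Length[sing],
    (* treat a pair as nested if one singularity sits in the causal
       future (out-component) of the other *)
    "NestedPairs" ->
     Select[Subsets[sing, {2}],
      MemberQ[VertexOutComponent[cg, First[#]], Last[#]] &]|>]
```

This is a starting point rather than a definitive detector; the right threshold, and whether out-component membership really captures "BH inside another BH", are exactly the open questions above.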
Now for the physics…
Given the tiny number of steps that can be run on these models, we are probably looking at vacuum fluctuations at a very small scale. That makes it unlikely that we would observe any BHs forming via gravitational collapse (not enough steps).
What are these singularities, then? To me, they look like topological BHs that have nothing to do with gravitational collapse and whose stability depends on the rewriting rules alone. Now imagine that our expanding universe forms these sub-Planck BHs, which leech some of the spacetime into pocket universes. WMs show that nothing special happens in those regions and that they expand the same as everywhere else.
Our own vacuum may carry a specific signature of these topological BHs. Their average density and duration could not only affect our cosmological constant but also make them a dark matter candidate. Moreover, one could try to match one of the WMs to our own universe based on these criteria.
Sooner or later, certain interesting WMs will need to be placed in the server cloud, with large numbers of steps computed and stored, to be explored by the community.
There is much more to discuss here but it’s probably a good start.
Legend: WM = Wolfram Model | BH = Black Hole | sub-Planck = a reference to Stephen's view that these “atoms of space” are much smaller than the Planck scale
Interesting experiment. Please develop your ideas and publish some results in a physics journal. Most of my publications are based on experiments done on a personal computer.
I was originally confused by how the BH code worked, to the point where I could not customize it for my purposes.
However, shortly after making this post I discovered that the "CausalConnectionGraph" function is robust enough to give the needed information. The problem I am facing now is trying to use the "WolframModelEvolutionObject" feature together with the "CausalConnectionGraph" function to save computation time. Any help here would be appreciated; I can't seem to find any examples where those two features are used together.
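For what it's worth, one way to avoid recomputation is to evolve the model once into an evolution object and then query that object repeatedly, since it caches the whole evolution. I don't know CausalConnectionGraph's exact signature, so this sketch only shows the evolution-object half, using a standard example rule:

```wolfram
(* Evolve once into an evolution object; later property queries reuse
   the cached evolution instead of re-running the model *)
evo = ResourceFunction["WolframModel"][
   {{x, y}, {y, z}} -> {{x, z}, {x, w}, {y, w}, {z, w}},
   {{1, 2}, {2, 3}}, 12];

cg = evo["CausalGraph"];  (* extracted from the cache, no re-evolution *)
VertexCount[cg]
```

If CausalConnectionGraph accepts a graph or an evolution object as input, feeding it `cg` or `evo` directly would be the natural way to combine the two, but that depends on its actual signature.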
Another thing I found is that the step number at which the comparison between events starts can matter a lot when looking for singularities. While steps preceding step 5 are not necessarily important, later ones can give rise to BHs that only appear as such at a certain iteration and not before. That means a simple search over steps 5 to X is not good enough; one must essentially examine every starting step below the maximum number computed.
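Concretely, that amounts to scanning every starting generation rather than one fixed range. A rough sketch, where `findSingularities` is a hypothetical placeholder for whatever detector one uses (it is not an existing function):

```wolfram
(* Sketch: run the detector once per starting generation, so BHs that
   only "turn on" at a later iteration are not missed.
   findSingularities is hypothetical, standing in for the detector. *)
scanAllStarts[evo_, maxGen_] :=
 Table[start -> findSingularities[evo, start, maxGen],
  {start, 1, maxGen - 1}]
```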
I played with a few WMs this way and found that rule 24528, for example, not only has nested BHs but appears to be a never-ending process. Instead of turtles all the way down, it could be black holes all the way down.
Wolfram mentioned early on that causal invariance basically depends on multiple paths eventually reconverging. Well, by that definition, WMs that have persistent BHs are not causally invariant, and theories such as GR would not work in that case. I am not sure that is absolutely true, as you could have a system that is mostly causally invariant in the continuum limit while still persistently generating small BH patches that never resolve themselves. In fact, the concepts of computational reducibility/irreducibility could be directly related to this.
Lee Smolin has a theory of cosmological evolution that contains nested BHs, but I do not think this is the same thing, unless every nested BH somehow experiences a big bang as well. Dark matter, on the other hand, could be made of these tiny pinpricks of curved space, able to clump together with others of their kind.
I will be more than happy to develop this further, though I could use some help with the code, and I was also hoping for some discussion on the subject.
Thank you Anton for the very interesting post. I have built a service for searching for singularity points over given generations.
Singularity example from model 3975
You can test finding a singularity point by going to
gigabrain.io, Search model,
and searching for a model by its numeric id. Then click the Videos button. There you can choose "Analyse singularity" from the list and enter the generations to examine into the input box in the format
5_20, where the first integer is the minimum generation and the second is the maximum generation.
Then click the "Generate video" button to also launch the singularity analysis. The analysis takes some time and automatically updates the findings on the screen below.
Some generations take a long time to analyse, so choose the generation numbers carefully. If the update does not show, email firstname.lastname@example.org, or use lower generations or generations close to each other, like 6_7.
And, as you mentioned, the results are also stored in the cloud with larger step counts. I was planning a method for examining those results; maybe this could be a topic for further discussion.
Thank you for the discussion.
Hi Tuomas, appreciate the feedback!
I did test it on gigabrain this past week, and it took me some time to figure out how the requests are processed and where they end up showing. I tried it on two WMs (24528 and 3975), and it does take a while (days) for higher numbers of steps; however, if it saves the results to the cloud, that should not matter in the long run. Unfortunately, generations above 35 seem to give an error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
  <Code>AccessDenied</Code>
  <RequestId>JQEG9VE85B4NMT8A</RequestId>
  <HostId>kZeWl/i9YBPWU4n0G9GlJ+ODqY83BsuyAxKQZ7F6UHq+kptCjH1yA5RTVeoPjW99m9tsjS0IyuI=</HostId>
</Error>
Hopefully this part can be fixed, as all the interesting stuff will be happening at later generations. One idea/question I've had concerns the ability to upload a computation directly into gigabrain for a particular model. Say I ran a search for singularities over 100 generations (5_100) all day on my computer and it finally gave me the results; perhaps those results could be uploaded directly instead of having to repeat the computation. I can see that some sort of authenticity check might be a good idea, to make sure the data matches the model number and the computation type.
As for a general method of examining these singularities, it might end up being a little tricky. Causal invariance, so often mentioned in the Wolfram Physics Project, is basically the claim that any divergence in the causal graph is always temporary. Applied to these singularities, the same claim means that none of them will persist for very long. One could come up with a new parameter, call it "causality", that describes how causally invariant a model is after a certain number of generations. A value of 1 would indicate that every generation is causally invariant; any singularity would then lower that value depending on how long it persists. The lowered value can be defined as one minus the ratio of steps inside black-hole regions to the total number of steps.

For example, say model 24528 was run for 7 generations and has a total of 18 steps (partial generations). At the 5th generation, a singularity forms at step 7. At the 7th generation we count the total number of steps (18 in our case) and the steps that end up inside that singularity (7). Our "causality" parameter would be 1 - 7/18 ≈ 0.61, indicating the overall tendency of the causal structure to recombine its steps. If Stephen and Jonathan are right, then after a large enough number of generations that value will approach 1. Nested singularities present an added difficulty here and should perhaps be treated as an extra anti-causal hit against the value of 1.

Unfortunately, all of this would require a rigorous analysis of the causal graph. A graph of 100 generations would have to be run not only as 1_100, 2_100, 3_100, …, 99_100, but also at lower numbers of steps to determine the durations of singularities. That is a lot of computation! It would almost be easier to examine it visually with some sort of AI algorithm. It would also be nice if singularity regions could be highlighted even when the singularity itself goes away at some later generation.
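Spelled out in code, the proposed "causality" parameter for the model-24528 example above; this sketch assumes the two event counts have already been extracted from the causal graph by some detector:

```wolfram
(* causality = 1 - (steps inside singularities)/(total steps) *)
causality[totalSteps_, stepsInSingularities_] :=
 N[1 - stepsInSingularities/totalSteps]

causality[18, 7]  (* ≈ 0.611: 7 of the 18 steps end up
                     inside the singularity *)
```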
The most interesting part of all this is that macroscopic causality appears to emerge from microscopic randomness (random in the sense that we cannot pick and choose which update is applied to our foliation of space). The underlying rule will ultimately determine whether a causal structure emerges at all and, if it does, at what scale (number of generations). Personally, I am hoping for a very tiny but persistent percentage of singularities at very high generations, pointing to some interesting physics.
Thank you Anton for the thorough answer.
At that time there were many 3D video generations taking time in the queue, which is why it took so long for the generations to show up. Now all of the Wolfram models in the registry also have a 3D video generated, and the queue is not that full at the moment.
Generations above 35 have so many edges in the graph that the analysis image and SVG file are not produced; this is something I will fix in the following versions. The error you saw appeared because the image file was not found after the analysis stage.
Your point about distributed computation is very interesting. As your post suggested, I've made changes to add a new action called "Singularity scan". This scans the range and stores the number of edges in each generation and the number of causally connected edges for each pair of compared generations. As you said, every generation has to be compared, and I built the function towards that.
And the analysis stage can only be run on generations that are feasible to compare, i.e., those whose edge counts stay below a certain limit.
After that I will look at calculating the "causality" parameter you proposed.
This weekend I will do more testing; in a few days I'll have the scan part ready and I'll write more about that.