Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Science sorted by active

# [MENTORSHIP] Identify Cellular Automaton Classes with Machine Learning
https://community.wolfram.com/groups/-/m/t/900210
*NOTE: The complete notebook of this post is attached at the end.*
-----
## Introduction
During this project I learned a great deal not only about cellular automata but about machine learning algorithms in general. Before starting the mentorship program, I had the opportunity to participate in the Wolfram Mathematica Summer Camp in 2015, where I became extremely interested in Mathematica and machine learning. This became a mentorship project because cellular automata are quite difficult to learn alone; with the help of a mentor, I was able to learn more about cellular automata and work with them hands-on. **Please see the attached file for all of the information.** Thanks.
## Background Information
### Cellular Automata
Cellular automata are mathematical models for complex natural systems containing large numbers of identical components with local interactions. An automaton consists of a lattice of sites, each with a finite set of possible values. The value of each site evolves synchronously in discrete time steps according to identical rules, and is determined by the previous values of the sites in its neighborhood. Even though cellular automata are simply defined, they have been shown to exhibit complex behavior.
### Classes of Cellular Automata
Stephen Wolfram proposes a system of four classes: I, II, III, and IV. Class I automata have very simple behavior: almost all initial conditions lead to exactly the same uniform final state. Some examples of class I cellular automata are rules 0, 32, and 160.
Class II cellular automata show many different possible final states, but all consist of a certain set of simple internal structures that either (1) remain the same forever or (2) repeat every few steps. Examples of this class are rules 4, 108, and 250.
Class III automata have more complex behavior. There is an apparent aspect of randomness, although triangles and other small-scale structures appear at some levels. Examples are rules 22, 30, and 150.
Class IV involves a complex mixture of order and randomness: localized structures are produced that seem fairly simple on their own, but they move and interact in very complicated ways. An example of this class is rule 110.
## Part 1: Understanding the Behavior of All 4 Classes
Before creating a classifier, it is necessary to determine how to distinguish the four classes from each other. The difference between classes 1–2 on the one hand and classes 3–4 on the other is a mathematical one, so a test can be run on an automaton to check whether it belongs to classes 1–2 or classes 3–4. Examples of all four classes are shown below.
Image[CellularAutomaton[{32,{3,1},1},RandomInteger[2,400],400]/2]
This is an example of a class 1 cellular automaton.
![enter image description here][1]
Image[CellularAutomaton[{4,{3,1},1},RandomInteger[2,400],400]/2]
This is an example of a class 2 cellular automaton.
![enter image description here][2]
Image[CellularAutomaton[{22,{3,1},1},RandomInteger[2,400],400]/2]
This is an example of a class 3 cellular automaton.
![enter image description here][3]
Image[CellularAutomaton[{110,{3,1},1},RandomInteger[2,400],400]/2]
This is an example of a class 4 cellular automaton.
![enter image description here][4]
The test checks whether the automaton settles into a cycle in which the pattern it creates repeats itself. The following code excerpt shows how the test was built.
(* canonical form of a row under cyclic rotation *)
CyclicNormalize[list_]:=Sort[Table[RotateRight[list,i],{i,Length[list]}]][[1]]
(* True if the last row repeats (up to rotation) an earlier row *)
CyclicCyclingQ[list_]:=MemberQ[CyclicNormalize/@Most[list],CyclicNormalize[Last[list]]]
CyclicCyclingQ[{x_}]:=False
CyclicCyclingQ[{}]:=False
(* returns 1, 2, or "3 or 4" after up to 3 random trials *)
EvolutionClass[rules_,x_,t_]:=Module[{h,k=2,res=1},
If[MatchQ[rules,{_,_Integer,___}],k=rules[[2]]];
If[MatchQ[rules,{_,{_Integer,1},___}],k=rules[[2,1]]];
Catch[Do[
h=CellularAutomaton[rules,RandomInteger[k-1,x],t];
If[Not[Apply[SameQ,h[[-1]]]||CyclicCyclingQ[h]],Throw[res="3 or 4"]];
If[Not[Apply[SameQ,h[[-1]]]],res=2],{3}];res]]
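As a quick sanity check, the test can be applied to a known rule. This is only a sketch: the answer for a given rule can vary with the random initial conditions the function draws.

    EvolutionClass[{30, {3, 1}, 1}, 400, 400] (* returns 1, 2, or "3 or 4" *)

Totalistic code 30 appears in the class 3 list used later in this post, so one would typically expect the "3 or 4" branch here.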
However, to determine the difference between classes 3 and 4, some image processing techniques were necessary, along with a ClassifierFunction.
## Part 2: Identifying the Best Image Processing Functions to Build a Classifier
### Creating test images from the 3-color totalistic cellular automata
As a base case for the project, the 3-color totalistic cellular automata were used. There are a total of 2187 of them, and a few class 3 and class 4 CAs were picked out. The creation of the images is shown below along with the rules used.
classthrees={3,10,12,18,21,24,28,30,31,45,46,48,49,51,57,63,66,69,72,75,78,83,84,91,92,93,95,96,97,99,100,102,105,109,110};
classfours={15,34,65,69,88,99,133,136,148,153,157,203,228,231,248,258,262,266,294,331,379,397,458,553,593,629,797,801,805,914,963,964,966,967,997};
testimages=Image[CellularAutomaton[{#,{3,1},1},RandomInteger[2,400],400]/2]&/@Join[classthrees[[;;8]],classfours[[;;8]]];
testimages3=Image[CellularAutomaton[{#,{3,1},1},RandomInteger[2,400],400]/2]&/@Join[classthrees[[9;;]],classfours[[9;;]]];
testimages2=Image[CellularAutomaton[{#,{3,1},1},RandomInteger[2,400],400]/2]&/@Join[classthrees[[;;8]],classfours[[;;8]]];
testimages4=Image[CellularAutomaton[{#,{3,1},1},RandomInteger[2,400],400]/2]&/@Join[classthrees[[9;;]],classfours[[9;;]]];
### Creating a function to classify test images with a specific filter
The function operates on the set "testimages" by default, and can also take any image association explicitly. Additionally, an accuracy function was created to measure how well each filter helps identify the cellular automata. These functions are shown below.
ClassifyWithFilter[filter_,o:OptionsPattern[]]:=Classify[<|"3"->Map[filter,testimages[[;;8]]],"4"->Map[filter,testimages[[9;;]]]|>,o]
ClassifyWithFilter[filter_,imageassociation_,o:OptionsPattern[]]:=Classify[<|"3"->Map[filter,imageassociation[["3"]]],"4"->Map[filter,imageassociation[["4"]]]|>,o]
AccuracyTest0[filter_,classifier_,o:OptionsPattern[]]:=AccuracyTest0[filter,classifier,Automatic,"Accuracy",o]
AccuracyTest0[filter_,classifier_,imageassociation_,o:OptionsPattern[]]:=AccuracyTest0[filter,classifier,imageassociation,"Accuracy",o]
AccuracyTest0[filter_,classifier_,Automatic,measurement_,o:OptionsPattern[]]:=AccuracyTest0[filter,classifier,<|"3"->testimages3[[;;35-8]],"4"->testimages3[[35-8+1;;]] |>,measurement,o]
AccuracyTest0[filter_,classifier_,imageassociation_,measurement_,o:OptionsPattern[]]:=ClassifierMeasurements[classifier,
<|"3"->Map[filter,imageassociation[["3"]]],"4"->Map[filter,imageassociation[["4"]]]|>,measurement,o]
### Examples of how to use ClassifyWithFilter and the best image processing functions that were found
### Rotate 45 Degrees
imageRotate45DegFilter=ImageRotate[#,45 Degree]&;
imageRotate45DegClassifier=ClassifyWithFilter[imageRotate45DegFilter];
AccuracyTest0[imageRotate45DegFilter,imageRotate45DegClassifier]
> 0.814815
### Rain
rainFilter=ImageTransformation[#,Function[p,With[{C=150.,R=35.},{p[[1]]+(R*Cos[(p[[1]]-C)*360*2/R]/6),p[[2]]}]]]&;
rainClassifier=ClassifyWithFilter[rainFilter];
AccuracyTest0[rainFilter,rainClassifier]
> 0.796296
### Identity
identityFilter=Identity[#]&;
identityClassifier=ClassifyWithFilter[identityFilter];
AccuracyTest0[identityFilter,identityClassifier]
> 0.703704
## Part 3: The Errors
Throughout this project there was one large source of error: every time an accuracy test was run, the random initial conditions used to generate the CAs changed, which made all accuracy tests unreliable. This was as far as the research went due to time constraints. However, please feel free to pick up this project, as it is quite promising.
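One possible fix is sketched below, assuming the unreliability comes from the RandomInteger calls that generate the initial conditions: wrap the image generation in BlockRandom with a fixed SeedRandom, so every accuracy run sees identical boards. The name testimagesFixed and the seed 1234 are arbitrary choices for illustration.

    testimagesFixed = BlockRandom[SeedRandom[1234]; (* arbitrary but fixed seed *)
      Image[CellularAutomaton[{#, {3, 1}, 1}, RandomInteger[2, 400], 400]/2] & /@
        Join[classthrees[[;; 8]], classfours[[;; 8]]]];

Repeating the same pattern for the held-out sets (with a different fixed seed) would make the ClassifierMeasurements results reproducible across runs.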
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=40241.png&userId=20103
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=29262.png&userId=20103
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=66783.png&userId=20103
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=28514.png&userId=20103

Sasank Vishnubhatla, 2016-08-07T21:07:07Z

# SequencePredict: interesting behavior for CA rule plots
https://community.wolfram.com/groups/-/m/t/1281200
I submitted the following one-liner at the 2017 WTC one-liner competition, and I'd been meaning to ask this question ever since but kept forgetting:
s = Cases[RulePlot@CellularAutomaton[#], Inset[a_, __] -> a, ∞] &;
Join[f = Take[s@99, 1], SequencePredict[s /@ Range@98][f, "NextElement" -> 7]]
![enter image description here][1]
Essentially, s is a pure function which, when given an integer, returns a list of the 8 graphics objects which visualize the rules for the elementary cellular automaton with that number. This is illustrated below:
RulePlot[CellularAutomaton[30]]
GraphicsRow[s[30], Frame -> All]
![enter image description here][2]
A list of these is generated for the first 98 CAs (in theory skipping 0, but oh well, we had to save characters) and passed into SequencePredict, which uses a Markov model to return a SequencePredictorFunction.
Here's what 5 of those sequences look like:
GraphicsGrid[s /@ Range[5]]
![enter image description here][3]
This is where things get interesting: given only the first rule image of the 99th CA, the predictor correctly 'guesses' the next 7 elements, returning the RulePlot for CA 99!
Looking at the image above, it's clear that the CAs were listed using some sort of loop, which is likely what the predictor picks up.
However, the result was odd enough (since a rule plot is not inherently a sequence) to trigger my curiosity, and I was wondering if anyone has a clearer explanation of what is happening that allows it to predict correctly one element at a time.
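For what it's worth, the same mechanism can be reproduced on a toy symbolic sequence (a made-up example, not the actual rule-plot graphics): a Markov model trained on repetitions of a periodic pattern will continue that pattern from a one-element prompt.

    toy = SequencePredict[Table[{"a", "b", "c", "d"}, 20]];
    toy[{"a"}, "NextElement" -> 3]

The rule-plot case seems analogous: the loop-like ordering of the training plots gives the Markov model a strongly periodic structure to latch onto.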
Cheers,<br>
George
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=sr34gds.png&userId=11733
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-15at2.32.44AM.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=RulePlotsGrid.png&userId=616023

George Varnavides, 2018-02-08T23:19:22Z

# Game of Life (Manual) Neural Network
https://community.wolfram.com/groups/-/m/t/1424749
[Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) has a very simple set of rules that can be summarized in Mathematica in two lines:
GameOfLifeRule[{1,2|3}|{0,3}] := 1
GameOfLifeRule[{center_Integer, sum_Integer}] := 0
If the center cell is alive (1) and has 2 or 3 live neighbors (sum), it stays alive (1). If the center cell is dead (0) but has exactly 3 live neighbors, it becomes alive (1). Otherwise, the cell dies (0).
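The two definitions above can be checked directly on a few cases:

    GameOfLifeRule[{1, 2}] (* live cell with 2 live neighbors survives: 1 *)
    GameOfLifeRule[{0, 3}] (* dead cell with 3 live neighbors is born: 1 *)
    GameOfLifeRule[{1, 4}] (* overcrowded live cell dies: 0 *)

The first two inputs match the pattern `{1,2|3}|{0,3}` and hit the first definition; everything else falls through to the catch-all second definition and returns 0.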
In this post, we'll manually construct a Neural Network (NN) that can give the next state of the Game of Life. It will require no training whatsoever and we'll use only our knowledge of the rules of the game.
# First NN
Let's ''train'' a very simple NN that will be able to tell what the next state of a cell is, given the `center` cell and the `sum` of its neighbors. First, we generate the data for the NN (only 18 elements).
data = Flatten@Table[
{center, sum} -> List@GameOfLifeRule[{center, sum}]
, {sum, 0, 8}, {center, 0, 1}]
And then create a very simple NN. The values of the `Weights` and `Biases` below were chosen by running a one-minute training simulation and then rounding; the layer `ElementwiseLayer[#^5 &]` was added to push the final output toward 0 or 1 more sharply. The NN below doesn't need to be trained, as it is already "trained".
net = NetChain[{
LinearLayer[2, "Weights" -> {{0,3},{-4,-4}},
"Biases" -> {-11,11}],
LogisticSigmoid,
LinearLayer[1, "Weights" -> {{-16,-16}},
"Biases" -> 8],
ElementwiseLayer[#^5 &],
LogisticSigmoid
}, "Input" -> 2, "Output" -> 1];
Let's test it:
Tally@Table[Round@net@d[[1]] == d[[2]], {d, data}]
(* Output *) {{True, 18}}
There is probably a clever way of doing the same, but it will suffice for now.
# Second NN
Now that we can predict the next state of a cell, we need to build an NN that can apply those rules to the whole board.
A 3x3 convolution is the key to doing that. Look at the following two kernels:
$\begin{pmatrix}
1&1&1 \\
1&0&1 \\
1&1&1
\end{pmatrix}$ and $\begin{pmatrix}
0&0&0 \\
0&1&0 \\
0&0&0
\end{pmatrix}$
The first one computes the sum of the neighbors of the central cell, while the other simply copies the central cell. These are exactly the two inputs of the previous NN.
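The effect of the two kernels can be sketched with ListConvolve on a small hypothetical board (both kernels are symmetric under rotation, so convolution and correlation coincide; the `{2, 2}` overhang spec with 0-padding keeps the output the same size as the board):

    board = {{0, 1, 0}, {0, 0, 1}, {1, 1, 1}};
    ListConvolve[{{1, 1, 1}, {1, 0, 1}, {1, 1, 1}}, board, {2, 2}, 0] (* neighbor sums *)
    ListConvolve[{{0, 0, 0}, {0, 1, 0}, {0, 0, 0}}, board, {2, 2}, 0] (* the board itself *)
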
So, in order to build an NN that can play the Game of Life, we run the two convolutions above and feed the result to the previous NN. This can be done as:
netPlay[W_Integer, H_Integer] := NetChain[{
ConvolutionLayer[2, {3, 3}, "PaddingSize" -> 1,
"Weights" -> {
{{{0, 0, 0}, {0, 1, 0}, {0, 0, 0}}},
{{{1, 1, 1}, {1, 0, 1}, {1, 1, 1}}}
}, "Biases" -> None],
ReshapeLayer[{2, W*H}],
TransposeLayer[],
NetMapOperator[net],
ReshapeLayer[{1, H, W}]
},
"Input" -> NetEncoder[{"Image", {W, H},
ColorSpace -> "Grayscale"}],
"Output" -> NetDecoder[{"Image",
ColorSpace -> "Grayscale"}]
]
Here we reshape and transpose the layers so that the previous net is applied only at the channel level.
Now we need to test our NN. To do so, we build a function that generates a random Game of Life board.
RandomLife[W_Integer, H_Integer] := Block[{mat, pad=2},
mat = ArrayPad[RandomInteger[1, {H, W}], pad];
mat[[1,1]]=mat[[-1,-1]]=1;
Rule @@ (ImagePad[Image@#, -pad] & /@ (
CellularAutomaton["GameOfLife", {mat, 0}, 1]))
]
Here we generate a random board, pad it, apply the Game of Life rules, and then crop the result. This is needed since Mathematica changes the size of the board in the CellularAutomaton function, so we use the trick of padding and adding corner cells that will die in the next iteration, just to make sure the board stays the same size. A very hacky way, but nonetheless, it works...
a = RandomLife[30, 20]
netPlay[30, 20][a[[1]]] - a[[2]]
![result][1]
From the difference of the two images, we can see that the NN can indeed reproduce the Game of Life rules.
Let's now apply it to a more realistic scenario. Gliders!
img = Image@ArrayPad[{{0,1,0},{0,0,1},{1,1,1}}, 10];
ListAnimate@NestList[(netPlay@@ImageDimensions@img), img, 45]
![Glider][2]
Notice that the Glider just dies at the corner of the board.
An interesting problem that we could pose is to run the game backward: given the current configuration, try to find a previous one, using only random images as a training set. It is well known that the Game of Life is not reversible, so this procedure is not always possible. But it would be interesting to see what the NN would predict.
One could build such a neural network by feeding in random images, using convolutions to construct the previous step, and then applying the network shown above to recover the input image and compare the differences, in a kind of auto-encoder way.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-08-26_173812.png&userId=845022
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rest.GIF&userId=845022

Thales Fernandes, 2018-08-26T20:51:49Z