Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Staff Picks sorted by active

[WSS21] Implementing the statistical problem solving process
https://community.wolfram.com/groups/-/m/t/2312324
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/dc7c2396-7b19-4634-a06c-79c78ddf1a6c

Kimberly Gardner, 2021-07-13T17:16:48Z

[WSS21] The 2-state cellular automaton with non-adjacent parent cells
https://community.wolfram.com/groups/-/m/t/2312836
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/2450f7ec-177c-4be6-aa73-8d6e49591368

Alejandro Puga, 2021-07-13T18:46:34Z

[WSC19] Automated spot the difference
https://community.wolfram.com/groups/-/m/t/1732399
Description
-----------
I created a [website][1] that automatically makes a spot-the-difference challenge from various images. Given an image, I first find the specific items within it using ImageContents. I then remove or edit each of those contents and put the edited subimage back onto the original image, or, in the case of removal, use Inpaint to cover the gap.
When the user arrives at the site, they choose a difficulty level, whether they would like one image or all of the images in that difficulty range, and an image. They can click the blank image on the far right to reveal the correct solution.
General Image Functions
-----------------------
The heart of my project's functionality is ImageContents, a function that uses machine learning to find and identify things in a given image. Though ImageContents is based on the same idea as ImageIdentify, it can find items in subimages rather than only classifying the entire image. For example, let's take the ImageContents of this elephant image. It finds each elephant separately.
![enter image description here][2]
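In code, this lookup can be sketched as follows (a hedged example; `elephants` stands for the imported photo above, which is not defined in this post):

    (* list what was detected, together with its bounding box *)
    contents = ImageContents[elephants, All, {"Concept", "BoundingBox"}]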
One of the first problems I ran into was that Inpaint will sometimes use parts of the original image which contain other objects: for example, Inpaint might successfully cover the large elephant, but catch the small elephant in its mask, covering the large elephant with an unsettling floating baby elephant head.
![enter image description here][3]
In order to solve this, I created a helper called `boundingboxes`, which safely extracts the bounding boxes of everything ImageContents finds:

boundingboxes[image_] := Quiet[Check[
  List @@@ Flatten@Values@Normal@ImageContents[image, All, "BoundingBox"], {}]]

From those boxes, my first main function, `mask`, auto-generates a binary mask covering an object's bounding box. A later function, `expunge`, uses these masks to Inpaint the image contents:

mask[{{col1_, row1_}, {col2_, row2_}}, {nrows_, ncols_}, previous_: None] :=
  Table[If[nrows - row2 < row <= nrows - row1 + 1 && col1 < col <= col2, 1,
    If[previous === None, 0, previous[[row, col]]]], {row, nrows}, {col, ncols}]
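The post mentions `expunge` without showing its definition; a minimal sketch of how it might combine the two helpers above (assuming the `boundingboxes` and `mask` definitions given here) is:

    (* hypothetical: fold all bounding boxes into one binary mask, then Inpaint it *)
    expunge[image_] := Module[{dims, boxes, m},
      dims = Reverse@ImageDimensions[image]; (* {nrows, ncols} *)
      boxes = boundingboxes[image];
      m = Fold[mask[#2, dims, #1] &, None, boxes];
      If[m === None, image, Inpaint[image, Image[m]]]]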
The next function I made is called ImageMapAt. It uses ImageTrim to cut the contents out of the image and applies the given function to the trimmed contents, then uses ImageCompose to paste the edited contents back into the original image, one at a time. ImageMapAt aligns each edited piece with the parameters {Left, Top} when pasting it back, so it doesn't return outputs like this:
![enter image description here][4]
## Image Difference Functions ##
Once all my general functions were working, my next step was to create the functions that are applied to the images to automate the differences. These are based on built-in functions in the Wolfram Language, tuned to produce variations that are visible yet not too obvious. They include `blurred`, which blurs an image by 2 units; `darker`, which darkens dominant colors; `aquarecolor`, which replaces a dominant color with part of the Aquamarine gradient; and various others.
## Displaying Image Differences ##
Once I had created differences, my next step was to display them. The idea is simple: merely subtract the original image from the one with differences. The difficult part was finding a way to display them nicely: by default, the differences sit on a black background, which makes them hard to see, and the ColorNegated version was too bright, so I used ImageAdjust. Here are some of the results.
![enter image description here][5]
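The display step above amounts to something like this (a sketch; `original` and `edited` are placeholder names for an image pair):

    diff = ImageDifference[original, edited]; (* differences on a black background *)
    ImageAdjust[diff] (* stretch the contrast so the differences are easy to see *)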
## Image Distance Sorting ##
Now that I had created and displayed the differences, I sorted the images from larger changes to smaller ones using ImageDistance, because the larger the distance, the bigger the change. For example, compare these two distances. Wouldn't you agree the first one is roughly ten times easier than the second?
![enter image description here][6]
![enter image description here][7]
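The sorting step can be sketched as follows (`original` and the list `edits` are placeholder names):

    (* sort by decreasing distance from the original: biggest, easiest change first *)
    sorted = SortBy[edits, -ImageDistance[original, #] &];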
## Website Code ##
I then created LibraryAdd, a function that is used to add images into my curated image library. Lastly, I created my website. The basic idea is that next to the original (left) and the new image (right), there will be a blank/white image that I refer to as "blank." The user clicks on the "empty space" once they think they know the difference, and then the correct difference is shown. If it is clicked again, the image returns to its original blank state. The code is shown below.
CloudDeploy[
FormFunction[{"difficulty" -> {"easy" -> {0, 1/4},
"medium" -> {1/4, 1/2}, "hard" -> {1/2, 3/4},
"impossible" -> {3/4, 1}}, "selection" -> {"random", "all"},
"image" -> (CloudImport[First@# <> "/1.png"] -> First[#] & /@
Select[CloudObjects[
"gallery"], ! StringContainsQ[First@#, ".png"] &])},
With[{total = Length[CloudObjects[#image <> "/"]]/2,
original = #image <> "/1"},
Grid[With[{orderly =
Table[With[{new = #image <> "/" <> ToString[i]},
"<img height=\"200px\" src=\"" <> If[#2, blank, #] <> "\"" <>
If[#2,
" onclick=\"javascript:this.src=(this.src=='" <> # <>
"' ? '" <> blank <> "' : '" <> # <> "');\"", ""] <>
">" & @@@ {original <> ".png" -> False,
new <> ".png" -> False, new <> "a.png" -> True}],
{i, TakeQuantile[Range[2, total], #difficulty]}]},
If[#["selection"] === "random", {RandomChoice[orderly]},
orderly]]]] &,
AppearanceRules -> <|"Title" -> "Spot the Difference",
"Description" ->
"Choose a difficulty, whether you would like to see the \
solutions or not, and an image, then hit submit to try to spot the \
difference between the original image on the left and the edited one \
on the right. <i><a \
href=\"mailto:stella@maymin.com\">stella@maymin.com</a></i>"|>], \
"spot.me", Permissions -> "Public"]
## Future Work ##
In the future, as an extension of my website, I would like to make it more of a game, perhaps by using Dynamic functionality. Another extension would be a timer counting how many seconds it took the user to click the correct difference, or a counter of how many incorrect clicks on the Locator there were. Using those scores, I think it might be possible to create a leaderboard across all users, creating a competition between users all over the world as they try to beat each other's high scores in the race to the top. Lastly, a simpler extension is the option of letting users upload their own images, which would be saved in my master library so that users can play with images others have uploaded.
## Acknowledgements ##
I would like to thank my mentor, Rory Fougler, for helping me with my code every step of the way. It really made a huge difference. I would also like to thank Chip Hurst for providing various insights and helpful functions throughout my project. Lastly, I would like to thank Mads Bahrami, Kyle Keane, and Anna Musser for making this entire experience truly unforgettable. Overall, I am so grateful for my knowledge and capabilities now, thanks to my stay here at the Wolfram High School Summer Camp.
[1]: https://www.wolframcloud.com/obj/stella/spot.me
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageContentselephants.png&userId=1720407
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=OopsPost.png&userId=1720407
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=oopsididitagainpost.png&userId=1720407
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=orderlydiffs.png&userId=1720407
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageDistance01.PNG&userId=1720407
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageDistance03.PNG&userId=1720407

Stella Maymin, 2019-07-12T01:28:57Z

Classifying stocks by their price volatility
https://community.wolfram.com/groups/-/m/t/2333494
![Percent Change over Time of Tesla, Amazon, and Ford][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=StockVolatility.png&userId=2317066
[2]: https://www.wolframcloud.com/obj/0435cfb6-cd1d-495e-8419-76323ffdd81e

Arshaan Sayed, 2021-08-03T21:23:02Z

[WSS21] Expression difference
https://community.wolfram.com/groups/-/m/t/2312810
![ExpressionDifference][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=diff.png&userId=2253909
[2]: https://www.wolframcloud.com/obj/3d68e9ac-8174-4528-8657-e8af79d6f98d

Aman Dewangan, 2021-07-13T17:59:20Z

Simulating brain tumor growth with diffusion-growth model
https://community.wolfram.com/groups/-/m/t/294122
![enter image description here][5]
When playing with Mathematica 10 I constructed this very simple example of an application of the NDSolve command, which I wanted to share. The objective is to model, in a highly simplified way, the growth of a special kind of brain tumour which affects mainly glial cells. I follow modelling ideas discussed in the excellent book ["Mathematical Biology" (Vol 2) by J.D. Murray][1]. It turns out that gliomas, which are neoplasms of glial cells, i.e. neural cells capable of division, can be modelled by a rather simple diffusion-growth model.
$$\frac{\partial c}{\partial t}=\nabla \cdot \left(D(x) \nabla c \right)+ \rho c$$
where $c$ is the concentration of cancer cells and $D(x)$ is the diffusion coefficient, which depends on the coordinates; $\rho$ models the growth rate of the cells. The following boundary condition has to be observed (even though it will be ignored in the model I use later on):
$${\bf n} \cdot D(x) \nabla c = 0 \qquad \text{for}\; x\; \text{on}\; \partial B.$$
In reality the diffusion coefficient will depend on the tissue type, i.e. grey matter vs white matter. I will instead use an image from a CT scan to describe the different densities of the tissue.
![enter image description here][2]
In the book by Murray great care is taken to estimate the diffusion coefficient but I just want to show the principle here. I use the attached file "brain-crop.jpg" and import it:
img2=Import["~/Desktop/brain-crop.jpg"]
Then I sharpen it and convert it to gray-scale.
img3 = Sharpen[ColorConvert[img2, "Grayscale"]]
Then I use that image to determine the diffusion coefficient, locally:
diffcoeff = ListInterpolation[ImageData[img3], InterpolationOrder -> 3]
I should now determine the boundaries using something like EdgeDetect. As the background is black and allows no diffusion at all, we can simplify this by just setting a larger (rectangular) boundary box like so:
boundaries = {-y, y - 1, -x, x - 1};
\[CapitalOmega] =
ImplicitRegion[And @@ (# <= 0 & /@ boundaries), {x, y}];
Next we can solve the ODE on the domain:
sols = NDSolveValue[{
    {Div[1./500.*(diffcoeff[798.*x, 654*y])^4*Grad[u[t, x, y], {x, y}], {x, y}] -
       D[u[t, x, y], t] + 0.025*u[t, x, y] ==
      NeumannValue[0., x >= 1. || x <= 0. || y <= 0. || y >= 1.]},
    {u[0, x, y] == Exp[-1000. ((x - 0.6)^2 + (y - 0.6)^2)]}},
   u, {x, y} \[Element] \[CapitalOmega], {t, 0, 20},
   Method -> {"FiniteElement",
     "MeshOptions" -> {"BoundaryMeshGenerator" -> "Continuation",
       MaxCellMeasure -> 0.002}}]
Note that we start with an initially Gaussian distributed tumour and describe its growth from there. Also I took the fourth power of the diffcoeff function, which changes the relation between grayscale and diffusion rate. You can change the coefficient to get different patterns for the growth. Interestingly, this integration gives a warning about intersecting boundaries in MMA10, which it did not say in the Prerelease version; if someone can fix that, that would be great. For any time we can now overlay the resulting distribution onto the CT image:
ImageCompose[img3, {ContourPlot[
Max[sols[t, x, y], 0] /. t -> 2, {y, 0, 1}, {x, 0, 1},
PlotRange -> {{0, 1}, {0, 1}, {0.01, All}}, PlotPoints -> 100,
Contours -> 200, ContourLines -> False, AspectRatio -> 798./654.,
ColorFunction -> "Temperature"], 0.6}]
This should give something like this:
![enter image description here][3]
Using
frames = Table[
ImageCompose[
img3, {ContourPlot[
Max[sols[d, x, y], 0] /. d -> t, {y, 0, 1}, {x, 0, 1},
PlotRange -> {{0, 1}, {0, 1}, {0.01, All}}, PlotPoints -> 100,
Contours -> 200, ContourLines -> False,
AspectRatio -> 798./654., ColorFunction -> "Temperature"],
0.6}], {t, 0, 10, 0.5}];
we get a list of images,
![enter image description here][4]
which can be animated
ListAnimate[frames, DefaultDuration -> 20]
to give
![enter image description here][5]
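To keep the animation as a standalone file, the frames can also be exported (the filename here is arbitrary):

    Export["BrainTumor.gif", frames, "DisplayDurations" -> 0.5]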
This is only a very elementary demonstration, and certainly still far away from a "real" medical application, but it demonstrates the power of NDSolve and might, in a modified form, be useful as a case study for some introductory courses.
Cheers,
Marco
[1]: http://www.springer.com/new+&+forthcoming+titles+%28default%29/book/978-0-387-95228-4
[2]: /c/portal/getImageAttachment?filename=brain-crop.jpg&userId=48754
[3]: /c/portal/getImageAttachment?filename=BrainTumor-still.jpg&userId=48754
[4]: /c/portal/getImageAttachment?filename=1473BrainTumor-frames.jpg&userId=48754
[5]: /c/portal/getImageAttachment?filename=BrainTumor.gif&userId=48754

Marco Thiel, 2014-07-14T13:54:54Z

Reproducing a generalization of the logistic map bifurcation diagram
https://community.wolfram.com/groups/-/m/t/2332065
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/385b0ba5-9e11-4f7a-98b1-9ed196a87010

Christophe Favergeon, 2021-08-01T13:09:37Z

Simplify 3rd order ODE to v''' + Z v = F: a general formula
https://community.wolfram.com/groups/-/m/t/2332816
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/82028527-34c0-4dad-8b99-2de83055faa0

Rauan Kaldybaev, 2021-08-02T13:26:57Z

Using the Gurobi Optimizer in the Wolfram Language
https://community.wolfram.com/groups/-/m/t/2333107
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/54323170-e51a-47ca-a4d4-3516daa04d8dArnoud Buzing2021-08-02T17:06:46ZTraining a recurrent neural network (RNN) to generate piano music
https://community.wolfram.com/groups/-/m/t/2328597
> **GitHub Repository:** https://github.com/alecGraves/Howl
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/da6d3b76-7bbf-4a55-9fcd-058e7f1e17bf

Alec Graves, 2021-07-27T14:09:12Z

Brain haemorrhage diagnosis: using LeNet based deep learning model
https://community.wolfram.com/groups/-/m/t/2273879
Introduction
------------
Brain haemorrhage is a type of stroke. It is caused by an artery in the brain bursting and causing localized bleeding in the surrounding tissues. This bleeding kills brain cells.
The Greek root for blood is *hemo*. Haemorrhage literally means "blood bursting forth." Brain haemorrhages are also called cerebral haemorrhages, intracranial haemorrhages, or intracerebral haemorrhages.
Cerebral haemorrhage accounts for about 13% of all strokes in the United States. It is the second leading cause of stroke. (The leading cause of stroke is a blood clot, a thrombus, in an artery in the brain, which blocks blood flow and cuts off required oxygen and nutrients to the brain.)
Importing Dataset
------------
I used a data set of haemorrhage and non-haemorrhage brain scans from Kaggle, with each class placed in its own variable.
infected=FileNames["*.png","E:\\COURSES\\Wolfram\\BrainTumorImagesDataset\\training_set\\hemmorhage_data"];
uninfected=FileNames["*.png","E:\\COURSES\\Wolfram\\BrainTumorImagesDataset\\training_set\\non_hemmorhage_data"];
Constructing File Objects for Images
------------
I wanted to match each brain image with a value of either True for haemorrhage or False for no haemorrhage. To make the import more efficient, I created separate file objects for each of the image variables. Each variable contained 70 images, one set for the haemorrhage class and one for the non-haemorrhage class.
infectedIMG = File /@ infected;
uninfectedIMG = File /@ uninfected;
Then I created lists of 70 True and 70 False values to be matched with their respective images, and stored them in variables alongside the infected and uninfected file objects.
infectedvalues = Table[True, Length[infected]];
uninfectedvalues = Table[False, Length[uninfected]];
Length[infectedIMG]
(* 70 *)
Length[uninfected]
(* 70 *)
Finally, using the AssociationThread function, I associated the images with their values and divided the data into two groups, 75% for training and 25% for validation.
data=RandomSample[AssociationThread[infectedIMG->infectedvalues]];
traininglength = Length[data]*.75
(* 52.5 *)
trainingdata = data[[1 ;; 52]];
validationdata = data[[53 ;;]];
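Note that `data` above is built from the infected images only, so both the training and validation sets would contain only True examples. Presumably both classes should be pooled before shuffling and splitting; a hedged sketch of that fix:

    (* combine both classes, then shuffle and split 75/25 *)
    data = RandomSample[Join[
        AssociationThread[infectedIMG -> infectedvalues],
        AssociationThread[uninfectedIMG -> uninfectedvalues]]];
    traininglength = Floor[Length[data]*0.75];
    trainingdata = data[[;; traininglength]];
    validationdata = data[[traininglength + 1 ;;]];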
Creating the Neural Network
------------
I then started work on the neural network, which is based on the LeNet/MNIST image-classification architecture. The network's goal is to classify images as True or False, describing whether the patient suffers a brain haemorrhage or not. I built a NetChain with multiple layers. One notable layer is the ResizeLayer, which resizes every image to 135 by 135, so that all inputs match the fixed size the network expects. Further layers include convolution layers, Ramp activations, and pooling layers, which together extract features and narrow them down into the categories used to classify each image.
dims = {135, 135}
(* {135, 135} *)
lenet = NetChain[{
   ResizeLayer[dims],
   ConvolutionLayer[20, 5], Ramp, (* keeps the useful features *)
   PoolingLayer[2, 2], (* downsamples *)
   ConvolutionLayer[50, 5], Ramp,
   PoolingLayer[2, 2], (* downsamples *)
   FlattenLayer[], 500, (* makes the features into a feature vector *)
   Ramp, 2, (* two classes: True or False *)
   SoftmaxLayer[]}, (* turns the vector into probabilities *)
  "Output" -> NetDecoder[{"Class", {True, False}}], (* tensor into True or False *)
  "Input" -> NetEncoder["Image"]] (* turns the image into numbers *)
Training the Neural Networks with NetTrain
------------
I trained the neural nets with 10 training rounds.
results = NetTrain[lenet, Normal[trainingdata], All,
  ValidationSet -> Normal[validationdata],
  MaxTrainingRounds -> 10, TargetDevice -> "CPU"]
![NetTrain Result][1]
Training the Neural Network with Augmented Layers
------------
Next I implemented an ImageAugmentationLayer, which randomly crops images to create new data sets to improve my neural network.
augment = ImageAugmentationLayer[{135, 135}, "Input" -> NetEncoder[{"Image", {139, 139}}], "Output" -> NetDecoder["Image"]]
I made the images 139 by 139 and allowed the augmentation layer to crop the images by 4 pixels at random within the constraints of the dimensions of 135 by 135.
dims2 = {139, 139}
lenet2 = NetChain[{ResizeLayer[dims2],
ImageAugmentationLayer[{135, 135}], ConvolutionLayer[20, 5], Ramp,
PoolingLayer[2, 2], ConvolutionLayer[50, 5], Ramp,
PoolingLayer[2, 2], FlattenLayer[], 500, Ramp, 2, SoftmaxLayer[]},
"Output" -> NetDecoder[{"Class", {True, False}}],
"Input" -> NetEncoder["Image"]]
I trained this network the same way, with only 7 training rounds, on the CPU.
results2 =
NetTrain[lenet2, Normal[trainingdata], All,
ValidationSet -> Normal[validationdata], MaxTrainingRounds -> 7]
![enter image description here][2]
Creating a Testing Set for Data
------------
![enter image description here][3]
Data Visualization
------------
Lastly, I made a ConfusionMatrixPlot using the ClassifierMeasurements function, which compares the neural network's predicted classes against the actual classes.
![enter image description here][4]
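Assuming `results2` is the NetTrainResultsObject from above, such a plot could be produced along these lines:

    trained = results2["TrainedNet"];
    cm = ClassifierMeasurements[trained, Normal[validationdata]];
    cm["ConfusionMatrixPlot"]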
Conclusion
------------
I built a neural network that diagnosed brain haemorrhage with an accuracy of about 99%. Furthermore, as displayed in the confusion matrix, there were 18 examples where the network's prediction matched the actual result for True and 18 where it matched for False.
Future Improvements
------------
To further enhance this project, I could generate more augmented data sets to continue training the neural net. Moreover, I could use images from different data sets to prevent overfitting and improve generalization. Lastly, I could write a function that pinpoints the haemorrhage itself by finding the edges of the haemorrhage area, using edge detection and colour analysis.
&[Wolfram Notebook][5]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=2253909
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=2253909
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=2253909
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=2253909
[5]: https://www.wolframcloud.com/obj/f4d8f183-800b-4e6c-993f-6ee5d572e29a

Aman Dewangan, 2021-05-22T17:23:02Z

Juggling in space using Archimedean spirals and complex algebra
https://community.wolfram.com/groups/-/m/t/2331903
*SUPPLEMENTARY WOLFRAM MATERIALS for ARTICLE:*
> Adam Dipert, R. (2021).
> Choreographic techniques for human bodies in weightlessness
> Acta Astronautica. 182: 46-57. https://doi.org/10.1016/j.actaastro.2021.02.001
> [Full related videos playlist][1]
&[Wolfram Notebook][2]
[1]: https://youtube.com/playlist?list=PLKhOZ0nVFPlFm799hQMJv9ynBo3vsMxf3
[2]: https://www.wolframcloud.com/obj/7532ce3d-fe0e-4503-9ab3-fc2643e6dbd0R. Adam Dipert2021-07-31T15:41:17ZPeaceful chess queen armies
https://community.wolfram.com/groups/-/m/t/2330564
![enter image description here][1]
I was just solving the Rosetta Code problem [Peaceful chess queen armies][2]. The idea is to place q armies of m queens each on an n-by-n chess board such that no queen of one army can capture a queen of another army. The task asks us to place 2 armies of 4 queens each on a 5×5 board:
ClearAll[ValidSpots, VisibleByQueen, SolveQueen, GetSolution]
VisualizeState[state_] := Module[{q, cells, colors},
colors = DeleteCases[Union[Flatten@state[[All, All, "q"]]], -1];
colors = Thread[colors -> (ColorData[106] /@ Range[Length[colors]])];
q = MapIndexed[
If[#["q"] == -1, {},
Text[Style[#["q"], 20, #["q"] /. colors], #2]] &, state, {2}];
cells =
MapIndexed[{If[OddQ[Total[#2]], FaceForm[],
FaceForm[GrayLevel[0.8]]], EdgeForm[Black],
Rectangle[#2 - 0.5, #2 + 0.5]} &, state, {2}];
Graphics[{cells, q}, ImageSize -> Length[First@state] 30]
]
ValidSpots[state_, tp_Integer] := Module[{vals},
vals =
Catenate@
MapIndexed[
If[#1["q"] == -1 \[And] DeleteCases[#1["v"], tp] == {}, #2,
Missing[]] &, state, {2}];
DeleteMissing[vals]
]
VisibleByQueen[{i_, j_}, {a_, b_}] :=
i == a \[Or] j == b \[Or] i + j == a + b \[Or] i - j == a - b
PlaceQueen[state_, pos : {i_Integer, j_Integer}, tp_Integer] :=
Module[{vals, out},
out = state;
out[[i, j]] = Association[out[[i, j]], "q" -> tp];
out = MapIndexed[
If[VisibleByQueen[{i, j}, #2], <|#1,
"v" -> Append[#1["v"], tp]|>, #1] &, out, {2}];
out
]
SolveQueen[state_, toplace_List] :=
Module[{len = Length[toplace], next, valid, newstate},
If[len == 0,
tmp = state;
Print[VisualizeState@state];
Abort[];
,
next = First[toplace];
valid = ValidSpots[state, next];
Do[
newstate = PlaceQueen[state, v, next];
SolveQueen[newstate, Rest[toplace]]
,
{v, valid}
]
]
]
GetSolution[n_Integer?Positive, m_Integer?Positive, numcol_ : 2] :=
Module[{state, tp},
state = ConstantArray[<|"q" -> -1, "v" -> {}|>, {n, n}];
tp = Flatten[Transpose[ConstantArray[#, m] & /@ Range[numcol]]];
SolveQueen[state, tp]
]
GetSolution[5, 4, 2]
![enter image description here][3]
Notice that no queen of army 1 can capture any queen of army 2 (and vice versa).
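As a quick sanity check of the core predicate, two queens sharing a diagonal see each other, while offset ones do not:

    VisibleByQueen[{1, 1}, {3, 3}] (* True: same diagonal *)
    VisibleByQueen[{1, 2}, {3, 3}] (* False: no shared row, column, or diagonal *)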
But we can go well beyond that. Let's check, for a given chess board size, how many queens per army we can place in the case of 2 armies:
GetSolution[3, 1]
GetSolution[4, 2]
GetSolution[5, 4]
GetSolution[6, 5]
GetSolution[7, 7]
![enter image description here][4]
We can also look at more than 2 armies, let's look at 3 armies:
![enter image description here][5]
There are many more things to explore: not only square but also rectangular chessboards, more colors, other chess pieces…
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=sadf34dfab.jpeg&userId=11733
[2]: http://www.rosettacode.org/wiki/Peaceful_chess_queen_armies "Peaceful chess queen armies"
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2021-07-29at23.38.49.png&userId=73716
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2021-07-30at00.09.00.png&userId=73716
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2021-07-30at00.34.00.png&userId=73716

Sander Huisman, 2021-07-29T22:35:36Z

Epidemic simulation with a polygon container
https://community.wolfram.com/groups/-/m/t/1901002
*MODERATOR NOTE: coronavirus resources & updates:* https://wolfr.am/coronavirus
----------
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3687ezgif.com-optimize.gif&userId=11733
[2]: https://www.wolframcloud.com/obj/c443430b-2f0e-461a-ad95-801016802147

Francisco Rodríguez, 2020-03-18T01:05:06Z