Project L board game analysis

Posted 3 years ago
POSTED BY: Brad Klee
10 Replies

A few remarks on the Project L side of the analysis (thus, mostly unrelated to the tiling problem and optimization of algorithms for that).

Now for some insight toward winning the game, let's define efficiency with regard to gaining points.

Gaining points is only half of the story in Project L. In my experience, during most of a typical playthrough, gaining new polyomino pieces is a relevant consideration for almost the entire game, with the exception of perhaps the last two or, at most, three rounds.

Sidetrack: this makes me wonder how long a typical game is. Recall that the final round is triggered when the stack of unseen black puzzles is exhausted, and that for lower player counts some black puzzles do not come into play (the black puzzle stack is reduced by 2 * (4 - number of players), IIRC). I suspect larger player counts make for shorter games, which means the end-game is, correspondingly, shorter.
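For a rough sense of scale (the deck size below is a placeholder, since I don't recall the exact number of black puzzles), a back-of-envelope sketch:

blackDeckSize = 20; (* hypothetical full black deck; not a verified count *)
blackStack[players_] := blackDeckSize - 2 (4 - players)
(* remaining rounds if each player takes black puzzles at a guessed rate *)
roundsToEnd[players_, ratePerPlayer_] :=
 blackStack[players]/(players ratePerPlayer)
roundsToEnd[#, 0.5] & /@ {2, 3, 4} (* gives {16., 12., 10.} with these guesses *)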

Back on track: for most of the game, then, getting new pieces is relevant. In fact, in the early game, building up your polyomino engine should dominate any consideration of the point value of puzzles. The relative value of new pieces versus points declines smoothly as the game goes on. However (and in this, Project L works very similarly to a "classic" engine-builder like Dominion), the speed of this transition depends very much on what other players do; specifically, how quickly they snatch up black puzzles. Other than the competition for valuable puzzles (white and black), this is the truly interactive component of Project L (and, for me, the more interesting one).

Maybe a more rigorous analysis should then just focus on solo play (whose rules elude me at the moment). In any case, calculating the value of a puzzle should include consideration of the three parameters that you highlight: 1) point value, 2) reward piece value, 3) time and complexity ('ease to complete').

Your efficiency analysis highlights that there is a discrete jump in hardness-to-complete between puzzles that can be completed with n tetrominoes and those that need n tetrominoes plus one monomino or domino. Note, however, that all these "best" 5-pointers give only a monomino as a reward! This is probably by design: the low piece reward is meant to compensate for the high 'efficiency'.

Going back to thinking about a value function for puzzles, it is clear that this also needs to include a determination of the value of each polyomino piece. For example, one would expect an L-piece to be more useful than a straight. Additionally, pieces that appear more rarely as rewards should therefore be more valuable. This all looks like a sort of eigenvalue problem.

Ignoring one's currently available pieces and half-completed puzzles (which is a very big ignore!), the value of a puzzle should look something like v = v(points, rewardPiece, complexity, t), where t is the mean number of remaining rounds until game end. My contention is then that dv/dpoints is increasing in t, while dv/drewardPiece is decreasing in t. In fact, since one generally acquires more and more diverse pieces as the game goes by, the absolute value of dv/dcomplexity should also decrease over time.
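For concreteness, here is a throwaway parametrization with exactly these monotonicities (the functional form and coefficients are made up, purely to illustrate the contention):

v[points_, rewardPiece_, complexity_, t_] :=
 points (1 + t)/10 + rewardPiece/(1 + t) - complexity (1 + t)/20

D[v[p, r, c, t], p, t] (* 1/10 > 0: dv/dpoints is increasing in t *)
D[v[p, r, c, t], r, t] (* -1/(1 + t)^2 < 0: dv/drewardPiece is decreasing in t *)
D[v[p, r, c, t], c] (* -(1 + t)/20: |dv/dcomplexity| shrinks as t runs down *)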

I'll take a look at the single-player rules, and see if I can get some specific calculations done.

POSTED BY: Zsombor Méder
Posted 1 year ago

> Ignoring one's currently available pieces and half-completed puzzles (which is a very big ignore!), the value of a puzzle should look something like v = v(points, rewardPiece, complexity, t), where t is the mean number of remaining rounds until game end. My contention is then that dv/dpoints is increasing in t, while dv/drewardPiece is decreasing in t. In fact, since one generally acquires more and more diverse pieces as the game goes by, the absolute value of dv/dcomplexity should also decrease over time.
>
> I'll take a look at the single-player rules, and see if I can get some specific calculations done.

This sounds about right to me. Rather than jumping straight to the solo rules, which I think could be comparable in difficulty to the multiplayer rules, perhaps we can try a controlled experiment. Here's what I'm thinking: a mini-game called "Project L: 0-1-2-3-4-5":

  • The Project L game cards are sorted into piles according to points; one card is drawn at random from each pile, and the six cards are stacked face down, from 5 on the bottom to 0 on top.
  • The mini-game is for one player, or possibly multiple players in parallel.
  • Player(s) seek to complete their card stack in as few turns as possible.
  • The zero point card can be passed.
  • The turn-action rules are the typical ones.
  • For one player, the win condition is finishing in the minimal number of turns.
  • For two players, the winner is whoever completes the stack in fewer turns.
  • In case of a tie, all sorts of tie-breaker options are available: scoring by pieces, minimum physical time, etc.

There are a few reasons behind this mini-game. Since the state space is strictly limited, the multiway graph of all possible plays could plausibly be enumerated, especially using branch and prune. This would require considering at most about a half-million plausible solutions, heavily weighted toward the last two or three levels. A lot of pruning can be done using the fact that solutions with too many pieces take much longer to play. This sounds like a challenging problem to solve, but I think it's doable. If we try and fail to get exact results, at least we're starting something...

In total, there are over 300,000 possible six-card sets, and that leaves room for variation in difficulty. Once a solver is made for one particular game, it can be mapped over examples, and the examples can then be compared for difficulty. This would be an excellent chapter for the games book, I think.
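For instance, the number of six-card sets is just the product of the pile sizes. With placeholder pile counts (I haven't re-counted the deck, so these are made up), the setup might look like:

pileSizes = <|0 -> 8, 1 -> 10, 2 -> 10, 3 -> 10, 4 -> 8, 5 -> 6|>; (* placeholders *)
Times @@ Values[pileSizes] (* distinct six-card sets under these made-up counts *)
(* draw one mini-game stack: a random card index from each pile, 5 on bottom *)
stack = Table[{p, RandomInteger[{1, pileSizes[p]}]}, {p, 5, 0, -1}]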

Also worth mentioning: I have solved all of the Project L cards, but I ended up getting sidetracked and possibly never found an opportunity to report all results:

results

We can talk more about this soon...

POSTED BY: Brad Klee

POSTED BY: Zsombor Méder
Posted 2 years ago

Hi Victor, thanks for your interest!

We are slowly but surely moving ahead with this project, and FindExactCover should go through WFR pretty soon. The final write-up will be more interesting than these basically public-facing notes I have here, because we're starting to understand why the timing statistics look the way they do. Unfortunately, we still haven't succeeded in compiling Knuth's dancing links Algorithm X, which is the main outstanding "to do" item on this mini-project.

It's nice that you figured out how to speed up making the cover matrices. I also improved this code while I was running more tests on the actual solvers (where the bottleneck really is):

PieceRotations[True, True][piece_] := Union[
  ResourceFunction["ArrayRotations"][piece]]

PieceRotations[True, False][piece_] := Union[
  ResourceFunction["ArrayRotations"][piece, False]]

PieceRotations[_, _][piece_] := {piece}

(* all translates of a piece, padded into a dims[[1]] x dims[[2]] template *)
PieceTranslates[piece_, shift_, dims_
  ] := With[{inset = PadRight[PadRight[#,
        dims[[2]], 0] & /@ piece,
     dims[[1]], {ConstantArray[0, dims[[2]] ]}]},
  Catenate[Outer[RotateRight[inset, {#1, #2}] &,
    Range[0, shift[[1]] ], Range[0, shift[[2]]], 1]]]

(* one binary row per valid placement, indexed over the template's free cells *)
PartialCoverRows[tfRot_, tfRef_
   ][template_, piece_] := Module[{
   tot = Total[piece, 2],
   zeroes = Position[template, 0],
   rot = PieceRotations[tfRot, tfRef][piece],
   dims = Dimensions[template], shift, candidates},
  shift = Map[dims - Dimensions[#] &, rot];
  candidates = Catenate[MapThread[
     If[MemberQ[Sign[#2], -1],
       Nothing, PieceTranslates[#1, #2, dims]] &,
     {rot, shift}]];
  Select[Outer[#1[[Sequence @@ #2]] &,
    candidates, zeroes, 1],
   Total[#] == tot &]]
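
As a quick sanity check, here is PartialCoverRows for a domino on an empty 2 x 2 template; each output row is indexed by the template's free cells:

PartialCoverRows[True, True][ConstantArray[0, {2, 2}], {{1, 1}}]
(* four placements, two horizontal and two vertical; up to row order:
   {{1,1,0,0}, {0,0,1,1}, {1,0,1,0}, {0,1,0,1}} *)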

MatricesToCoverMatrix[template_, pieces_,
  OptionsPattern[{
    "Rotations" -> True,
    "Reflections" -> True,
    "PlaceOne" -> False
    }]] := If[TrueQ[OptionValue["PlaceOne"]],
  Catenate[MapThread[Function[{piece, ind},
     Join[ind, #] & /@ PartialCoverRows[
        OptionValue["Rotations"], OptionValue["Reflections"]
        ][template, piece]],
    {pieces, IdentityMatrix[Length[pieces]]}]],
  Catenate[PartialCoverRows[
       OptionValue["Rotations"], OptionValue["Reflections"]
       ][template, #] & /@ pieces]] 

Would you want to see this as a WFR? If yes, can you think of a better name for it?

It should be about as fast as what you have, but it also comes keyed in with some options and better free-space recognition. The design makes it easy to build the cover matrix for "Scott's pentomino problem" from Knuth's arXiv article:

pentominoProblem = MatricesToCoverMatrix[
   ArrayPad[ConstantArray[1, {2, 2}], 3, 0],
   upToPentominos[[5]], "PlaceOne" -> True];

Dimensions@pentominoProblem

Out[]= {1568, 72} 

We can find all solutions of this matrix, but our times are still a little slow compared to industry standard (due I think to not having compiled code). Assuming you can make this matrix, what's your fastest time for solving it with Mathematica code?
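
For reference, here's the sort of minimal, uncompiled top-level solver I'd consider a baseline (just a sketch, not the solver timed above); it returns solutions as lists of row indices, and it branches on the column with the fewest candidate rows, per Knuth's heuristic:

ExactCoverSolutions[mat_] := Module[{solve},
  solve[rows_, cols_] := Module[{c, cand},
    If[cols === {}, Return[{{}}]];
    (* branch on the constraint column covered by the fewest rows *)
    c = First[MinimalBy[cols, Total[mat[[rows, #]]] &]];
    cand = Select[rows, mat[[#, c]] == 1 &];
    Catenate[Function[r, Prepend[#, r] & /@ solve[
         Select[rows, mat[[#]] . mat[[r]] == 0 &], (* rows disjoint from r *)
         Select[cols, mat[[r, #]] == 0 &] (* columns r leaves uncovered *)
         ]] /@ cand]];
  solve[Range[Length[mat]], Range[Length[First[mat]]]]]

Length[ExactCoverSolutions[pentominoProblem]] would then count all solutions of the matrix above.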

POSTED BY: Brad Klee

Hi Brad, what a great article! It was a pleasure to read. And as much as I enjoyed understanding the magic in constructing the PolyominoCoverMatrix (I learned quite a few valuable techniques in the process!), we can simplify this code and remove a 13.1 dependency.

The construction of the cover matrix is pretty complicated. First, we start from a 2D binary array describing the form and then follow a series of transformations: binary array -> (via ArrayMesh and MeshPrimitives) polyomino graph -> (via FindIsomorphicSubgraph and FindCycle) sorted list of 4-cycles, which we use to create the cover matrix. However, since every 4-cycle essentially represents an original cell of the form, we make this whole journey only to return to a binary list representing the form.

We can accomplish the same more directly:

f = {{1, 0, 1, 1}, {1, 1, 1, 0}, {0, 0, 1, 1}};

embeds[f_, {n_, m_}] := 
 Module[{d1 = Dimensions@f, board = ConstantArray[0, {n, m}], e},
  e = ConstantArray[board, {n, m} - d1 + {1, 1}];
  Table[e[[i, j]][[i ;; i + d1[[1]] - 1, j ;; j + d1[[2]] - 1]] = f, {i, n - d1[[1]] + 1}, {j, m - d1[[2]] + 1}]; 
  Flatten[e, 1]]

Grid@Partition[ArrayPlot[#, ImageSize -> Tiny] & /@ embeds[f, {5, 5}] , 2]

embeds of the form in 5x5 matrix

allEmbeds[f_, board_] := 
 DeleteDuplicates@Flatten[embeds[#, board] & /@ ResourceFunction["ArrayRotations"][f],1]

Grid@Partition[ArrayPlot[#, ImageSize -> Tiny] & /@ allEmbeds[f, {5, 5}], 6]

all embeds of the form in a 5x5 matrix

PolyominoCoverMatrixFast[forms_, {m_, n_}] := 
 Flatten[Function[{f}, Flatten /@ allEmbeds[f, {m, n}]] /@ forms, 1]

In[85]:= IsomorphicGraphQ[
 CoverGraph[
  PolyominoCoverMatrix[{form}, PolyominoGraph[ArrayMesh[ConstantArray[1, {6, 6}]]]]],
 CoverGraph[PolyominoCoverMatrixFast[{f}, {6, 6}]]]

Out[85]= True

The new method is also faster, which justifies the name :)

In[86]:= PolyominoCoverMatrix[{form}, PolyominoGraph[ArrayMesh[ConstantArray[1, {6, 6}]]]]; // AbsoluteTiming

Out[86]= {0.151714, Null}

In[88]:= PolyominoCoverMatrixFast[{f}, {6, 6}]; // AbsoluteTiming

Out[88]= {0.001137, Null}

And while we are on the topic of simplification, I will pedantically point out that !MemberQ is called FreeQ, and that

In[92]:= IsomorphicGraphQ[PolyominoGraph[ArrayMesh[ConstantArray[1, {5, 5}]]], GridGraph[{6, 6}]]
Out[92]= True

(although I understand why you define emptiness in terms of PolyominoGraph, for consistency).
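
A quick check of that equivalence on the flat vectors used here (FreeQ searches all levels while MemberQ tests only level one, so on flat lists they agree):

FreeQ[{0, 1, 0, 1}, 2] === ! MemberQ[{0, 1, 0, 1}, 2]  (* True *)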

POSTED BY: Victor Kryukov

You have earned a Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: EDITORIAL BOARD
Posted 3 years ago

You might want to learn to use the block quote mechanism, for example:

> How do we define units for time? Can we really measure the tile's usefulness by the proportion of puzzle solutions that display the tile?

Just highlight the relevant text and push the " icon, or you can manually add ">" at the beginning of each line. It would be helpful to readers if you went back and edited your post to make it clearer what is copied and pasted from mine and what you added.

Here are answers to your questions, which I will quote again, just to emphasize important functionality of the BB, and how much it improves readability.

> How do we define units for time?

Local time coordinates can have units of turns or actions. The conversion is three actions per turn. Since the point of the game is to complete puzzles, the global time coordinate should count down (from a maximum of 52) the number of puzzles remaining to be solved. Although "How many puzzles have been solved" is a valid and useful way to measure in-game time, obviously there is not a linear conversion factor to actions. Solving any one puzzle involves finding a path through the multiway graph of all possible exact covers. Different solutions take different amounts of time.

> Can we really measure the tile's usefulness by the proportion of puzzle solutions that display the tile?

Instead of "display" perhaps "include"?

Utility principle: "A tile's utility is (at least partially) a function of how many times it is included in the solutions of a set of puzzles".

Non-utility principle: "A tile has no utility if it can't be used to solve any puzzle".

(In the question asked above, we are only concerned with tetrominos. As these are the most powerful to play, they are generally more useful, especially in the end game. However, they are not useful for small puzzles with fewer than four slots.)

Actual measurement is a more nuanced question, because even if the utility principle is correct, getting an actual number for the utility would involve extra arbitrary choices. Similar considerations apply to fairness metrics. The end game will not be fair if one player has a much better tile set than the others.
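
As a sketch of one such arbitrary choice: suppose pieceCounts is a list of per-solution usage-count vectors (one entry per tile type) gathered over a batch of puzzle solutions; then an inclusion-frequency utility is simply

pieceUtility[pieceCounts_] := N[Mean[Unitize /@ pieceCounts]]
(* the fraction of solutions that use each tile type at least once *)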

POSTED BY: Brad Klee

Yes, caveat emptor! We had already generated more content than was really necessary to get the point across, and now we're adding more off our list of omissions, with insightful results about the Project L board game. There's nothing better than getting carried away generating tangentially related, exciting content; the purpose of this Project L board analysis is to provide reproducibility, so that we as your summer school students can verify the results and keep an open-ended, evolving solution alive through Q&A; there's a chance for innovation and originality here, like The Story of Spikey. From a mathematical viewpoint, the board game "Project L" simply asks players to solve exact cover problems given limited resources and time constraints. The description helps tremendously; Project L is the kind of game you want to play with your friends and learn more about, because there are so many shapes: yellow monominoes, green dominoes, orange and blue trominoes, and five singly-colored tetrominoes. These are called polyominoes, and there are 90 in the picture. The analysis can be rephrased as a combinatorial problem in which the full set of tiles is the starting point, one that players would only build up to by the end of the game. For each puzzle, the weight metrics are:

  1. The number of points in the top left corner,
  2. The grid size (or weight) of the polyomino rewards piece,
  3. The grid size of the recessed, white inner regions that must be filled in.

Since the player with the highest score wins, we can assume that every point value P is in fact an expected time to completion T = P; that is, the number of points awarded in the top left corner assumes a linear relationship between complexity and time to completion. For a reward piece of weight W and actual completion time T', we have W/T' = (T/T') * (W/T), and with the link P = T we can write W/T' = (P/T') * (W/P); a small numeric example follows the list below. Each turn, the player can choose three consecutive actions from the following:

  • A puzzle is drawn from the bank at no cost.
  • A tile of weight N is played into the puzzle.
  • A tile of weight 1 is drawn from the bank.
  • A tile of weight N is traded to the bank for another of max weight N+1.
  • Multiple tiles of total weight X can be played into consecutive puzzles. (This play of X per one time interval is usually the player's maximum-power action, since total weight is a sum of up to four individual tile weights; players can have at most four puzzles on the board.) The expected time is mostly less than or equal to the actual time to completion T'; judging from the data, that's how most puzzles in the game are balanced.
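
Here is the small numeric example promised above (purely illustrative): under the assumption T = P, a puzzle's relative efficiency is its point value over the actual number of turns to complete it, and values above one mean the puzzle beats its expected time.

relativeEfficiency[points_, actualTurns_] := points/actualTurns
relativeEfficiency[5, 4] (* 5/4 > 1: a 5-pointer completed in 4 turns *)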

With Harshal Gajjar's commendable method from the Tiling of Polyominoes post, and Brad's write-up (thank you for writing it; it is so good that these results are effectively reproducible, and learning more about gTest and mTest is my thing), let's take a closer look.

mirrorv[tile_] := Module[{line = {}, tMirror = {}},
  tMirror = {};
  For[i = 1, i <= Length[tile], i++,
   line = {};
   For[j = Length[tile[[1]]], j >= 1, j--,
    AppendTo[line, tile[[i, j]]];
    ];
   AppendTo[tMirror, line];
   ];
  tMirror
  ]
rotatec[tile_] := Module[{line = {}, tMirror = {}},
  tMirror = {};
  For[i = 1, i <= Length[tile[[1]]], i++,
   line = {};
   For[j = Length[tile], j >= 1, j--,
    AppendTo[line, tile[[j, i]]];
    ];
   AppendTo[tMirror, line];
   ];
  tMirror
  ]
allTiles[tile_] := Module[{list = {}},
  list = Join[NestList[rotatec, tile, 3],
    NestList[rotatec, mirrorv[tile], 3]];
  list = DeleteDuplicates[list];
  list
  ]
tiles = {
  {{1}},
  {{1, 1}},
  {{1, 0},
   {1, 1}},
  {{1, 1, 1}},
  {{1, 1},
   {1, 1}},
  {{0, 1, 0},
   {1, 1, 1}},
  {{1, 0, 0},
   {1, 1, 1}},
  {{1, 1, 1, 1}},
  {{0, 1, 1}, 
   {1, 1, 0}}
  }
tiles = Flatten[allTiles /@ tiles, 1];
ArrayPlot[#, ImageSize -> 100, Frame -> None, Mesh -> True, 
   ColorRules -> {0 -> Transparent, 
     1 -> RandomChoice[{Red, Yellow, Green, Purple, Orange, LightBlue, 
        Blue, Pink, LightOrange}]}] & /@ tiles (* one random color per tile *)

all polyominoes

PolyominoGraph[arrayMesh_] := 
 Construct[
  With[{edges = 
      ReplaceAll[MeshPrimitives[#, 1], 
       Line[x_] :> UndirectedEdge @@ Round[x]]}, 
    Graph[edges, 
     VertexCoordinates -> (# -> # & /@ 
        Union[edges /. UndirectedEdge -> Sequence])]] &, arrayMesh]
form = PolyominoGraph[ArrayMesh[tiles[[11]]]]
emptiness = PolyominoGraph[ArrayMesh[ConstantArray[1, {2, 3}]]]

PolyominoCoverMatrix[forms_, emptiness_] := 
 With[{form4Cycles = 
    Map[Sort, 
     FindCycle[#, {4}, All] & /@ 
      Flatten[FindIsomorphicSubgraph[emptiness, #, All] & /@ forms], 
     3], 
   emptiness4Cycles = Map[Sort, FindCycle[emptiness, {4}, All], 2]}, 
  MapThread[
   Join, {IdentityMatrix[Length@form4Cycles], 
    Function[{cycles}, 
      ReplacePart[
       Array[0 &, Length@emptiness4Cycles], # -> 1 & /@ 
        Flatten[Position[emptiness4Cycles, #] & /@ cycles]]] /@ 
     form4Cycles}]]
CoverGraph[coverMatrix_] := 
 ResourceFunction["NestWhileGraph"][
  Function[{state}, 
   Select[
    state + # & /@ coverMatrix, ! MemberQ[#, 2] &]], {ConstantArray[0,
     Last[Dimensions[coverMatrix]]]}, UnsameQ @@ # &, 2]
CoverGraph[PolyominoCoverMatrix[{form}, emptiness]]
CoverGraph[
 PolyominoCoverMatrix[{form}, 
  PolyominoGraph[ArrayMesh[ConstantArray[1, {3, 4}]]]]]

Polyomino Cover Matrix Graph against Constant Array

What are the possible ways that five-point puzzles from Project L can be solved with a relative efficiency greater than one? That is, the puzzle takes less time than expected: we can solve it in fewer turns than the point value printed in its top left corner. How do we define units for time? Can we really measure a tile's usefulness by the proportion of puzzle solutions that display the tile?

The dancing links method allows a node x to exist in a detached state: it continues to exist on its own links, separate from the list we are traversing, so that we can swap it back in via what is left of its right neighbor (and what is right of its left neighbor). It's going to be fun. Here's my notebook for the tiling of polyominoes.

f = {{1, 0, 1, 1}, {1, 1, 1, 0}, {0, 0, 1, 1}};
embeds[f_, {n_, m_}] := Flatten[
   Table[
    ArrayPad[
     f,
     {{i - 1, n - i - Length[f]}, {j - 1, m - j - Length[First[f]]}}
     ],
    {i, n - Length[f] + 1},
    {j, m - Length[First[f]] + 1}
    ], 1];
Grid@Partition[
  ArrayPlot[#,
     ImageSize -> 100] & /@
   embeds[f, {5, 5}], 6]
mirrors[tile_] := Reverse[tile, {2}]
rotates[tile_] := Transpose[Reverse[tile]]
tileSet[tile_] := DeleteDuplicates[
  Join[
   NestList[rotates, tile, 3],
   NestList[rotates, mirrors[tile], 3]
   ]
  ]
tiles = {
   {{1}},
   {{1, 1}},
   {{1, 0}, {1, 1}},
   {{1, 1, 1}}, {{1, 1}, {1, 1}},
   {{0, 1, 0}, {1, 1, 1}},
   {{1, 0, 0}, {1, 1, 1}},
   {{1, 1, 1, 1}}, {{0, 1, 1}, {1, 1, 0}}
   };
tiles = Flatten[tileSet /@ tiles, 1];
myColorFunction = ColorData[{"ArmyColors", "Reversed"}];
enhancedArrayPlot[tile_] := ArrayPlot[
   tile,
   Mesh -> True,
   ImageSize -> 50,
   MeshStyle -> Orange,
   ColorRules -> {
     0 -> Transparent,
     1 -> myColorFunction[RandomReal[]]
     }
   ];
Grid[Partition[enhancedArrayPlot /@ tiles, 14], Frame -> All]

Project L Tiles

Project L Grid

I think this really gets to the crux of Project L, which is about solving exact cover problems. The idea, and I finally see the light now, is to use polyomino pieces to complete the puzzles within the given set of constraints. It's like taking my fist as a unit of measurement and building up from the simple description of that one unit: the puzzles get bigger, the constraints get bigger, and that escalation is the key component of the strategy game known as Project L, where simple mechanics build up to a sophisticated discourse on resource management, power, and efficiency.

Yes, I know numpy is the primary foundation of mathematics and computation in Python, whereas in this scenario we're implementing, in the Wolfram Language, functions for embedding visual representations of the polyominoes and the puzzles, analyzing piece rotations and reflections, and generating possible solutions for tiling puzzles:

rawData = {
   {{1, 0, 0, 1, 1}, {1, 0, 0, 0, 0}, {1, 0, 0, 0, 0}, {0, 0, 0, 0, 
     1}, {1, 1, 0, 0, 1}},
   {{0, 0, 0, 0, 1}, {0, 0, 0, 0, 1}, {1, 0, 0, 0, 0}, {1, 0, 0, 0, 
     0}, {1, 1, 1, 1, 1}},
   {{1, 0, 0, 0, 1}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {1, 0, 0, 0, 
     1}, {1, 1, 1, 1, 1}},
   {{0, 0, 0, 0, 1}, {1, 0, 0, 0, 0}, {1, 0, 0, 0, 0}, {0, 0, 0, 0, 
     1}, {1, 1, 1, 1, 1}},
   {{0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {1, 1, 0, 1, 
     1}, {1, 1, 1, 1, 1}}
   };
VisualizeMatrix[matrix_] := MatrixPlot[matrix,
  Mesh -> All,
  ImageSize -> 100,
  FrameTicks -> None,
  ColorRules -> {1 -> Cyan, 0 -> Pink}
  ]
ShearModal[piece_] := Module[
  {
   dims = Dimensions[piece],
   newArray
   },
  newArray = ArrayPad[
    piece,
    {{0, 1}, {0, 0}}
    ];
  MapIndexed[(#1 /. Rule @@@ Transpose[
        {
         Range[dims[[2]]],
         RotateRight[Range[dims[[2]]], #2[[1]] - 1]
         }
        ]) &, newArray]]
PieceTranslates[piece_, shift_, dims_] := RotateRight[
    PadRight[piece, dims], #] & /@
  Tuples[Range[0, #] & /@ shift]
translation = PieceTranslates[rawData[[1]], {2, 3}, {5, 5}];
Row[{
  Grid[{VisualizeMatrix[#], ArrayPlot[ShearModal[#],
       ColorFunction -> "SunsetColors",
       ImageSize -> 100]} & /@ rawData],
  Grid[Transpose[Partition[MatrixPlot[#,
        ColorFunction -> "LakeColors",
        FrameTicks -> None,
        ImageSize -> 100] & /@
      translation,
     6,
     6,
     {1, 1},
     {}]],
   Frame -> All ]
  }]

Project L Translation

Yeah, there's a balance between point value and the utility of reward pieces throughout the gameplay; that balance is an inherent feature of the game itself, and the utility is mutable. Do we need it to be immutable? Probably not. The value of a puzzle can be quantified based on these parameters, but fixing it to a single immutable number would constrain our understanding of the dynamics of multiplayer games, which keep shifting as play unfolds.

Considering how the actions of other players impact individual strategies, the controlled mini-game version of Project L gives us further choices that we can freely enumerate, for those of us who want to see all the possible plays (and it really does come down to that). Its multiway graph, viewed from different angles, seems to have neither beginning nor end to the game's complexity, and yet it is where we might identify optimal strategies; how can we go beyond intuition to quantify and strategize gameplay?

(* as defined earlier in the thread: rotation/reflection variants, plus a fallback *)
PieceRotations[True, True][piece_] := Union[
  ResourceFunction["ArrayRotations"][piece]]
PieceRotations[True, False][piece_] := Union[
  ResourceFunction["ArrayRotations"][piece, False]]
PieceRotations[_, _][piece_] := {piece}
PartialCoverRows[allowRotation_, allowReflection_][templateMatrix_, 
  pieceMatrix_] := Module[
  {
   zeroPositionsInTemplate = Position[templateMatrix, 0],
   dimensionsOfTemplate = Dimensions[templateMatrix],
   totalOnesInPiece = Total[pieceMatrix, 2],
   rotationsAndReflectionsOfPiece,
   whichPlacements,
   shiftsForPlacement
   },
  rotationsAndReflectionsOfPiece = 
   PieceRotations[allowRotation, allowReflection][pieceMatrix];
  shiftsForPlacement = Map[
    dimensionsOfTemplate - Dimensions[#] &,
    rotationsAndReflectionsOfPiece
    ];
  whichPlacements = Catenate[
    MapThread[
     If[
       MemberQ[Sign[#2], -1],
       Nothing,
       PieceTranslates[#1, #2, dimensionsOfTemplate]
       ] &,
     {
      rotationsAndReflectionsOfPiece,
      shiftsForPlacement
      }]
    ];
  Select[
   Outer[#1[[Sequence @@ #2]] &, whichPlacements, 
    zeroPositionsInTemplate, 1],
   Total[#] == totalOnesInPiece &]
  ]

originalMatrix = ArrayPad[ConstantArray[1, {2, 2}], 2, 0];
freshColorFunction = Function[
   value,
   If[value == 1,
    ColorData["SunsetColors"][RandomReal[]],
    White]];
limeColor = RGBColor[0.75, 1, 0];
arrayPlotWithFreshColors = ArrayPlot[originalMatrix,
   Mesh -> True,
   Frame -> True,
   ImageSize -> 400,
   FrameTicks -> None,
   ColorFunction -> freshColorFunction,
   MeshStyle -> Directive[limeColor, Dashed]
   ];
MatricesToCoverMatrix[template_,
  pieces_,
  OptionsPattern[{
    "Rotations" -> True,
    "Reflections" -> True,
    "PlaceOne" -> False}]] := If[
  TrueQ[OptionValue["PlaceOne"]],
  Catenate[
   MapThread[
    Function[{piece, ind},
     Join[ind, #] & /@ PartialCoverRows[
        OptionValue["Rotations"],
        OptionValue["Reflections"]][template, piece]],
    {pieces, IdentityMatrix[Length[pieces]]}]],
  Catenate[
   PartialCoverRows[
       OptionValue["Rotations"],
       OptionValue["Reflections"]
       ][template, #] & /@ pieces]]
newMatrix = MatricesToCoverMatrix[
   originalMatrix,
   rawData,
   "PlaceOne" -> True
   ];
navyColor = RGBColor[0, 0, 0.5];
newPlot = ArrayPlot[newMatrix,
   ImageSize -> 400,
   ColorRules -> {1 -> navyColor, 0 -> White}
   ];
{arrayPlotWithFreshColors, newPlot}

Array Plot With Fresh Colors, Project L

Seen through the lens of exact cover problems, polyominoes are geometric shapes we use to fill specific patterns and puzzles. The Mathematica code can simulate these polyomino pieces so that we can maximize the score via the ratio of a puzzle's point value to the actual time taken to solve it; building the polyomino engine shows us how to measure the spending power of a puzzle, and we can keep building on our backtracking algorithms. Are we analyzing the game's mechanics deeply enough for our purposes? How can we complete a set of puzzles in the fewest number of turns possible?

basicPentomino = {{1, 1, 1}, {1, 0, 0}, {1, 0, 0}};
maroonColor = RGBColor[0.5, 0, 0];
arrayPlotPentomino = ArrayPlot[basicPentomino,
   Mesh -> True,
   ImageSize -> 200,
   ColorFunction -> "GrayYellowTones",
   MeshStyle -> Directive[maroonColor, Dashed]
   ];
embedPieceInGrid[piece_, gridDimensions_] := Module[
   {
    pieceDimensions = Dimensions[piece],
    emptyGrid = ConstantArray[0, gridDimensions],
    embeddedGrids
    },
   embeddedGrids = ConstantArray[
     emptyGrid,
     gridDimensions - pieceDimensions + {1, 1}
     ];
   Table[
    embeddedGrids[[row, column]][[row ;;
        row + pieceDimensions[[1]] - 1, column ;;
        column + pieceDimensions[[2]] - 1]] = piece,
    {
     row,
     gridDimensions[[1]] - pieceDimensions[[1]] + 1
     },
    {
     column,
     gridDimensions[[2]] - pieceDimensions[[2]] + 1
     }
    ];
   Flatten[embeddedGrids, 1]];
pentominoEmbeddings = embedPieceInGrid[basicPentomino, {5, 5}];
gridOfArrayPlots = Grid@Partition[
    ArrayPlot[#,
       ImageSize -> Tiny,
       ColorFunction -> "GrayYellowTones"
       ] & /@ pentominoEmbeddings, 9];
{arrayPlotPentomino, gridOfArrayPlots}

Grid Of Array Plots, Project L

Just pretend that a seemingly simple board game like Project L can become a rich subject for proposing how we can find all solutions to a given problem more quickly than the current implementations. We can represent everything as a matrix of 0s and 1s, and the nonzero entries unfold into a set of circular, doubly linked lists: the foundation of the toroidal structure that supports Knuth's recursive, depth-first, backtracking algorithm. The links "dance" in the sense that nodes can be easily removed and reinserted, so the search can cover and uncover rows as it hits dead ends, and the list structure gives us an exhaustive search at a fraction of the bookkeeping cost. This is the same machinery that recursively solves exact cover problems when I ask questions and do Sudoku puzzle research.
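
Here is a tiny illustration of that detach/reattach idiom, with indices into left/right arrays standing in for pointers (just a sketch of the linked-list mechanics, not Knuth's full algorithm):

n = 5;
right = RotateLeft[Range[n]]; left = RotateRight[Range[n]];
detach[x_] := (right[[left[[x]]]] = right[[x]]; left[[right[[x]]]] = left[[x]];)
restore[x_] := (right[[left[[x]]]] = x; left[[right[[x]]]] = x;)
detach[3];
NestList[right[[#]] &, 1, n] (* traversal now skips node 3 *)
restore[3]; (* node 3 still points at its old neighbors, so it splices back in *)
NestList[right[[#]] &, 1, n] (* the original cycle is restored *)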

POSTED BY: Dean Gladish
Posted 3 years ago

Now, Stephen Wolfram has asked us to also solve tiling problems in terms of the System` function SatisfiabilityInstances, and to that end we already have a couple of WFRs: GenerateTiling and FindMinimalTilings. I used these functions to get some basic results, and they seemed slow to me. Applying the SAT solver to the exact cover problem gives us a benchmark, which then causes us to pause and consider whether our methodology for searching tiling space is really reaching optimal times.

In addition to the code above, we need another function, which allows input of a cover matrix to the SAT solver:

OverlapMatrix[covMat_] := With[{len = Length[covMat]},
  Transpose[covMat[[All, len + 1 ;; -1]]]  ]

SATCoverGraph[overlap_] := 
 With[{len = Dimensions[overlap][[2]]}, 
  ResourceFunction["ToDirectedAcyclicGraph"][NearestNeighborGraph[
    SatisfiabilityInstances[And @@ (
        Or[
           BooleanCountingFunction[{1}, len] @@ 
            PadLeft[(x /@ Position[#, 1][[All, 1]]), len, False],
           Nor @@ (x /@ Position[#, 1][[All, 1]])]
          & /@ overlap), x /@ Range[len], All] /. {True -> 1, 
      False -> 0}],
   ConstantArray[0, len],
   GraphLayout -> "LayeredDigraphEmbedding",
   VertexCoordinates -> Automatic,
   AspectRatio -> 1/2]]
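
The per-cell constraint above says "exactly one of the placements covering this cell is chosen, or none of them", which is just an at-most-one condition; a quick sanity check of that equivalence on two variables:

TautologyQ[Equivalent[
  Or[BooleanCountingFunction[{1}, 2][a, b], Nor[a, b]],
  BooleanCountingFunction[{0, 1}, 2][a, b]]]

Out[]= True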

As written, the output of SATCoverGraph should be isomorphic to the output of CoverGraph (and to the faster computation via augmented MultiwayDeletionsGraph), so we can run a timing test.

The test is similar to what's above, but this time we are trying to fit roto-reflect-translated copies of a chair tromino onto an $n \times 3$ grid.

form = PolyominoGraph[ArrayMesh[{{1, 1}, {1, 0}}]];

times0 = AbsoluteTiming[
     With[{mdgAll = MultiwayDeletionsGraph[
          PairwiseOverlapGraph[PolyominoCoverMatrix[{form},
            PolyominoGraph[ArrayMesh[ConstantArray[1, {#, 3}]]]
            ]],
          {}, "ExactCover"]},
       g0 = 
        EdgeAdd[mdgAll, {} -> # & /@ 
          Select[VertexList[mdgAll], Length[#] == 1 &]]
       ];
     ][[1]] & /@ Range[2, 7]

Out[]= {0.022514, 0.021699, 0.032995, 0.051815, 0.204917, 1.76713}

times1 = AbsoluteTiming[
     g1 = CoverGraph[PolyominoCoverMatrix[{form},
         PolyominoGraph[ArrayMesh[ConstantArray[1, {#, 3}]]]
         ]]; 
     ][[1]] & /@ Range[2, 7]

Out[]= {0.016607, 0.023163, 0.049301, 0.132063, 0.711254, 4.83718}

times2 = AbsoluteTiming[
     g2 = SATCoverGraph[OverlapMatrix[PolyominoCoverMatrix[{form},
          PolyominoGraph[ArrayMesh[ConstantArray[1, {#, 3}]]]
          ]]];
     ][[1]] & /@ Range[2, 7]

Out[]= {0.013869, 0.029674, 0.069643, 0.303353, 2.12466, 28.7976}

IsomorphicGraphQ[g0, g1]
IsomorphicGraphQ[g0, g2]

Out[]= True

Out[]= True

The time statistics do not look good for System`SatisfiabilityInstances:

ListLogPlot[{times0, times1, times2}, PlotRange -> All]

log times

ListPlot[times2/times0]

time ratio

If we adapt the code to try to solve the previous summer school problem (solved via minimal graph construction in 2 s), what we find is that the SAT solver hangs and cannot complete the task! Caveat emptor!

POSTED BY: Brad Klee
Posted 3 years ago
POSTED BY: Brad Klee