# Can Diagonal[] be used in Compile?

Posted 1 month ago | 300 Views | 9 Replies | 5 Total Likes
Can the `Diagonal[]` function (which returns the diagonal of a matrix) be used in a `Compile`d function? From the error message in the code below, it seems that `Compile` thinks the `Diagonal` function returns a tensor of depth 2, not of depth 1 (as it really does). Is this a bug, or am I misunderstanding something? Thank you for your help.

```mathematica
DiagCompile = Compile[{{statearray, _Integer, 2}},
  Module[{res = {0, 0, 0, 0}},
   res = Diagonal[statearray];
   res]]
```

    Compile::cset: Variable res of type {_Integer,1} encountered in assignment of type {_Integer,2}.
    Compile::cset: Variable res of type {_Integer,1} encountered in assignment of type {_Integer,2}.
Posted 1 month ago
Hi Thibaut,

I guess you have to define the return type of `Diagonal`:

```mathematica
diagCompile = Compile[{{statearray, _Integer, 2}},
  Module[{res = {0, 0, 0, 0}},
   res = Diagonal[statearray];
   res],
  {{Diagonal[_], _Integer, 1}}]
```

Regards -- Henrik
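A quick sanity check of this declared-type workaround (assuming the definition above has been evaluated; the 4×4 matrix is just an illustration):

```mathematica
(* the third argument of Compile declares the type Diagonal[...] returns *)
m = Partition[Range[16], 4];
diagCompile[m]  (* {1, 6, 11, 16} *)
```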
Posted 1 month ago
Unfortunately, it is still not compiled:

```mathematica
CompilePrint[diagCompile]
```

    1 argument
    4 Tensor registers
    Underflow checking off
    Overflow checking off
    Integer overflow checking on
    RuntimeAttributes -> {}

    T(I2)0 = A1
    T(I1)1 = {0, 0, 0, 0}
    Result = T(I1)2

    1   T(I1)2 = CopyTensor[ T(I1)1]
    2   T(I1)3 = MainEvaluate[ Hold[Diagonal][ T(I2)0]]
    3   T(I1)2 = CopyTensor[ T(I1)3]
    4   Return

However, Diagonal is so fast that it will still evaluate faster this way than a direct re-implementation.

```mathematica
cf = Compile[{{mat, _Integer, 2}},
   Block[{diag},
    diag = Table[mat[[i, i]], {i, Length[mat]}]]];

cf2 = Compile[{{mat, _Integer, 2}},
   Block[{diag},
    diag = Diagonal[mat]],
   {{Diagonal[_], _Integer, 1}}];

m = RandomInteger[100, {2000, 2000}];

Diagonal[m]; // RepeatedTiming  (* {9.0*10^-6, Null} *)
cf[m]; // RepeatedTiming        (* {0.030, Null} *)
cf2[m]; // RepeatedTiming       (* {0.0071, Null} *)

m = RandomInteger[100, {200, 200}];

Diagonal[m]; // RepeatedTiming  (* {9.*10^-7, Null} *)
cf[m]; // RepeatedTiming        (* {0.00009, Null} *)
cf2[m]; // RepeatedTiming       (* {0.0000152, Null} *)
```

However, as you can see, not compiling it at all is by far the fastest. Given that Diagonal works with packed arrays, it seems like an oversight that it is not compilable...

What bothers me now is this: if we declare `{Diagonal[_], _Integer, 1}`, can we still work with diagonals of both integer and real matrices in this compiled function? Is there a better solution?
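Regarding the integer/real question: since the declaration fixes the element type, one way out is to compile a separate function per element type. A minimal sketch under that assumption (the name `cf2Real` is my own):

```mathematica
(* same pattern as cf2, but declared for real matrices *)
cf2Real = Compile[{{mat, _Real, 2}},
   Block[{diag},
    diag = Diagonal[mat]],
   {{Diagonal[_], _Real, 1}}];

cf2Real[N@IdentityMatrix[3]]  (* {1., 1., 1.} *)
```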
Posted 1 month ago
This compiled implementation seems competitive in speed with a MainEvaluate call to Diagonal:

```mathematica
cf3 = Compile[{{mat, _Integer, 2}},
  Block[{diag},
   diag = Flatten[mat][[Range[1, Length[mat]^2, Length[mat] + 1]]]]]
```
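The index trick works because the diagonal elements of an n×n matrix sit at positions 1, n + 2, 2n + 3, … in the flattened list, i.e. at stride n + 1. A quick check on a 3×3 matrix:

```mathematica
mat = Partition[Range[9], 3];
Flatten[mat][[Range[1, Length[mat]^2, Length[mat] + 1]]]  (* {1, 5, 9} *)
Diagonal[mat]                                             (* {1, 5, 9} *)
```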
Posted 1 month ago
To be complete: my question came up while writing code to play gomoku (a game where you need to align 5 stones on a Go board), using a Monte Carlo tree search algorithm. The most time-consuming part in my implementation is the function which checks whether the considered move is a win: I need to consider the possibility of having 5 aligned stones on a line, column, or diagonal through the currently played stone.

Eventually, I didn't use the Diagonal function but took parts of the array directly. Using a compiled function gave me a substantial speed-up compared to my previous, non-compiled function, but maybe I am still far from optimal... Here is the code I am currently using:

```mathematica
CheckWin2Comp = Compile[{{statearray, _Integer, 2}, {posx, _Integer},
   {posy, _Integer}, {player, _Integer}},
  Module[{linec = {1, 1}, colc = {1, 1}, diag1c = {1, 1, 11}, diag2c = {1, 1},
    arraysize = Length[statearray], candidatesline = {0}, candidatescol = {0},
    candidatesdiag1 = {0}, candidatesdiag2 = {0}},
   (* row through the played stone *)
   linec = statearray[[posx, Max[posy - 4, 1] ;; Min[posy + 4, arraysize]]];
   If[Count[linec, player] > 4,
    candidatesline = Total /@ Partition[linec, 5, 1],
    candidatesline = {0}];
   If[MemberQ[candidatesline, 5 player], Return[player]];
   (* column *)
   colc = Transpose[statearray][[posy, Max[posx - 4, 1] ;; Min[posx + 4, arraysize]]];
   If[Count[colc, player] > 4,
    candidatescol = Total /@ Partition[colc, 5, 1],
    candidatescol = {0}];
   If[MemberQ[candidatescol, 5 player], Return[player]];
   (* main diagonal *)
   diag1c = Table[statearray[[posx - i, posy - i]],
     {i, Max[-4, Max[posx - arraysize, posy - arraysize]],
      Min[4, Min[posx, posy] - 1]}];
   candidatesdiag1 = Total /@ Partition[diag1c, 5, 1];
   If[MemberQ[candidatesdiag1, 5 player], Return[player]];
   (* anti-diagonal *)
   diag2c = Table[statearray[[posx - i, posy + i]],
     {i, Max[-4, Max[posx - arraysize, 1 - posy]],
      Min[4, Min[posx, arraysize - posy + 1] - 1]}];
   candidatesdiag2 = Total /@ Partition[diag2c, 5, 1];
   If[MemberQ[candidatesdiag2, 5 player], Return[player]];
   Return[0]],
  CompilationTarget -> "C", RuntimeAttributes -> {Listable},
  Parallelization -> True]
```
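A small usage sketch (my own, assuming a 15×15 board with empty cells coded as 0 and player 1's stones as 1, and that the function above has been evaluated): place five stones of player 1 on row 8, columns 4 through 8, and test the move at position (8, 6):

```mathematica
board = ConstantArray[0, {15, 15}];
board[[8, 4 ;; 8]] = 1;           (* five in a row for player 1 *)
CheckWin2Comp[board, 8, 6, 1]     (* 1: the move at (8,6) is part of a win *)
```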
Posted 1 month ago
Hi Szabolcs,

thank you for this - as always - interesting contribution! In my "solution" I regarded this as a minimal example showing the problem, and so was caring about syntax, not about efficiency. I am assuming that virtually all WL functions are compiled; compiling a compiled function does not make much sense! But my primary question: what is CompilePrint[]? Is it some new function of MM v12.0?

Regards -- Henrik
Posted 1 month ago
Actually CompilePrint is very old. I forgot to mention that you need to ``<< CompiledFunctionTools` `` first.
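A minimal sketch of loading the package and inspecting a trivial compiled function:

```mathematica
<< CompiledFunctionTools`                   (* load the context CompilePrint lives in *)
CompilePrint[Compile[{{x, _Real}}, x^2]]    (* prints the pseudo-bytecode of the function *)
```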
Posted 1 month ago
Wow, Szabolcs, very interesting and helpful! Thank you very much again!
Posted 1 month ago
Excellent, thanks a lot!

Thibaut
Posted 13 days ago
Even if Diagonal[] is not compilable, one can use Transpose[] instead in a compiled function to extract the diagonal:

```mathematica
cf4 = Compile[{{mat, _Integer, 2}}, Transpose[mat, {1, 1}]];
```

(Using CompilePrint[] shows that the function compiles successfully, without a MainEvaluate call.) Comparing this with @Szabolcs's examples, it is a bit faster than the other compiled functions, but still slower than an actual call to Diagonal[].
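The `{1, 1}` level specification tells Transpose to map both levels of the array onto the same output level, which collapses an n×n matrix onto its diagonal. A quick check outside Compile:

```mathematica
mat = Partition[Range[9], 3];
Transpose[mat, {1, 1}]  (* {1, 5, 9} *)
Diagonal[mat]           (* {1, 5, 9} *)
```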