We are almost to the contest, but first, one more optimization: down to one dimension. The idea is to use a 1D Autoglyph function as a PRNG, and it seems to work well on the cloud. Instead of all the complications with bitmaps, we have a simple 4-character Unicode message such as:
message = {0, 0, 0, 0, 1, 0, 3, 2, 0, 0, 0, 0, 1, 0, 3, 3, 0, 0, 0, 0,
3, 2, 2, 3, 0, 0, 0, 0, 1, 0, 3, 0};
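These digits look like the Unicode code points of the message characters, each written as 8 base-4 digits, most significant first; that reading is an assumption on my part, since the original encoder isn't shown. A Python sketch of that convention:

```python
# Sketch (assumed convention): each character's Unicode code point
# is written as 8 base-4 digits, most significant digit first.

def to_base4_digits(codepoint, width=8):
    """Return `width` base-4 digits of `codepoint`, most significant first."""
    digits = []
    for _ in range(width):
        digits.append(codepoint % 4)
        codepoint //= 4
    return digits[::-1]

def encode_message(text):
    return [d for ch in text for d in to_base4_digits(ord(ch))]

message = encode_message("NOëL")
```

Under this assumption, `encode_message("NOëL")` reproduces the 32-element list above ('N' is code point 78, which is 1032 in base 4, hence the leading 0, 0, 0, 0, 1, 0, 3, 2).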
We will pad this message with Autoglyphs, as before, but first let's look at some statistics. Importing the 256 secret keys and plotting the average by element:
ListLinePlot[Mean[AutoGlyphM1D[#] & /@ SecretKeys], PlotRange -> {0, 2.5}]

This shows that Autoglyphs are sparse as pads: a uniform distribution over the five symbol values would average $(0+1+2+3+4)/5=2$, and the per-element means sit well below that. The sparseness apparently averages out over time, because the stats of the encodes look pretty well randomized:
Image[Transpose[encodes /. RGBRep], ImageSize -> 512]
Show[Plot[256/5, {x, -1, 5}, AxesOrigin -> {-1, 0}],
ListPlot[Tally /@ Transpose[encodes], PlotRange -> {0, 256 2/5}],
ImageSize -> 512]

(granted: this only checks that the distribution is even element by element)
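The same per-position tally can be sketched in Python. Since the actual 256 secret keys aren't reproduced here, the sketch below uses uniform random pads as a stand-in; with genuinely uniform pads, each of the five values should appear about $N/5$ times at each message position:

```python
import random

random.seed(1)
N = 256
message = [0, 0, 0, 0, 1, 0, 3, 2,
           0, 0, 0, 0, 1, 0, 3, 3,
           0, 0, 0, 0, 3, 2, 2, 3,
           0, 0, 0, 0, 1, 0, 3, 0]

# Simulated uniform pads (a stand-in for the real Autoglyph pads),
# assuming mod-5 pad addition over symbol values 0..4.
encodes = [[(m + random.randrange(5)) % 5 for m in message]
           for _ in range(N)]

# Tally each value 0..4 per message position, like Tally /@ Transpose[encodes];
# each count should hover near N/5 = 51.2.
columns = list(zip(*encodes))
tallies = [[col.count(v) for v in range(5)] for col in columns]
```

This is the Python analogue of the `Tally /@ Transpose[encodes]` plot above: 32 positions, five counts each, all clustered around the 256/5 reference line.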
So the minimal working example on the cloud now gives us a data set:
256 unique encodings of one particularly well-loved holiday message. Can anyone decode it without using the (known) secret keys, or is the answer simply "NOëL"?
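With a key in hand, decoding is just subtracting the pad again. A minimal Python sketch, assuming the pad symbols take values 0 through 4 and the arithmetic is modulo 5 (consistent with the 256/5 reference line in the tally plot):

```python
# One-time-pad style encode/decode, assuming mod-5 arithmetic:
# message digits are base-4 (0..3), pad symbols run 0..4.

def encode(message, pad):
    return [(m + p) % 5 for m, p in zip(message, pad)]

def decode(encoded, pad):
    return [(e - p) % 5 for e, p in zip(encoded, pad)]

# Round trip: with the pad known, the message comes straight back;
# without it, every element is shifted by an unknown amount.
msg = [0, 0, 0, 0, 1, 0, 3, 2]
pad = [4, 1, 0, 2, 3, 3, 1, 4]
assert decode(encode(msg, pad), pad) == msg
```

The challenge above is exactly the reverse problem: recovering `msg` from 256 different `encode(msg, pad)` outputs when the pads are sparse rather than uniform.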