Hello,
Recently, Stanford researchers published a project on Implicit Neural Representations with Periodic Activation Functions (SIREN). I found it interesting and wanted to replicate it, but I dislike Python, so I gave Mathematica a shot. It turns out to be quite straightforward in this language, and kind of elegant too, IMHO.
I thought the code was worth sharing, so here it is:
image = ImageResize[ExampleData[{"TestImage", "Lena"}], 128]
{width, height} = ImageDimensions[image];
(* training targets: pixel values rescaled from [0, 1] to [-1, 1] *)
output = Flatten[#*2 - 1 &@ImageData[image], 1];
(* pixel coordinates, centered on the origin and scaled by Pi/2 *)
linspace[n_] := Array[# &, n, (n - 1)/2 // {-#, #} &]
input = Pi/2 Tuples[{linspace[height], linspace[width]}] // N;
(* linear layer with the paper's uniform initialization on [-Sqrt[6/in], Sqrt[6/in]] *)
layer[n_, in_] := LinearLayer[
  n,
  "Weights" -> RandomReal[{-#, #}, {n, in}],
  "Biases" -> RandomReal[{-#, #}, {n}],
  "Input" -> in
] &[Sqrt[6/in]]
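A quick sanity check on the Sqrt[6/in] bound (my own arithmetic, not from the original code): a uniform distribution on [-b, b] has variance b^2/3, so with b = Sqrt[6/in] each weight has variance

    b^2/3 = (6/in)/3 = 2/in

i.e. the weight variance scales inversely with the fan-in, which is exactly the per-layer scaling this kind of initialization is after.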
net = 128 // NetChain[
{
#, Sin,
layer[#, #], Sin,
layer[#, #], Sin,
layer[#, #], Sin,
layer[3, #], Sin
},
"Input" -> 2
] &;
net = NetTrain[net, input -> output];
(* map the predictions back to [0, 1] and reassemble the image *)
Image[Partition[(# + 1)/2 &[net /@ input], width], ColorSpace -> "RGB"]
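A nice side effect of learning the image as an implicit function of the coordinates is that the trained net can be evaluated on a denser grid than it was trained on. A rough sketch (the finer helper and the 2x factor are mine, so treat this as an illustration rather than tested code):

finer[n_, k_] := Range[-(n - 1)/2, (n - 1)/2, 1/k] (* k samples per original pixel *)
input2 = Pi/2 N[Tuples[{finer[height, 2], finer[width, 2]}]];
Image[Partition[(# + 1)/2 &[net /@ input2], 2 width - 1], ColorSpace -> "RGB"]

This samples the same coordinate range at half-pixel steps, giving a (2 width - 1) by (2 height - 1) rendering from the same trained net.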
(* inspect the weight and bias distributions of each linear layer (the odd chain positions) *)
Do[
  Print[Table[
    NetExtract[net, {i, x}] // Flatten // Histogram,
    {i, {1, 3, 5, 7, 9}}]],
  {x, {"Weights", "Biases"}}
]
Edit: Apparently the "Xavier" initialization method does exactly what the paper does, so I don't even need the "layer" function I had defined for that, and the initialization is just:
net = NetInitialize[
128 // NetChain[
{
#, Sin,
#, Sin,
#, Sin,
#, Sin,
3, Sin
},
"Input" -> 2
] &,
Method -> "Xavier"
]