FunctionLayer error compilerr: Cannot interpret as a network

Posted 4 months ago

Hi,

I have a dataset of 192 binary inputs (0/1) and corresponding outputs. I have compressed the inputs to 24 8-bit integers, so that the entire dataset can fit into RAM which greatly speeds up training of the neural network. Obviously, before a batch is trained the 24 8-bit integers have to be expanded to 192 integers. I managed to figure out how to do this in Tensorflow using the 'map' functionality of datasets, and now I want to do the same in Mathematica. I found FunctionLayer, but I am getting the error 'FunctionLayer::compilerr: Cannot interpret map[#1] & as a network.' on the following code:

uncomp[x_] := IntegerDigits[x, 2, 8]
map[x_] := Flatten[Map[uncomp, x]]
net = NetChain[{FunctionLayer[map], 
LinearLayer[256], ElementwiseLayer["ReLU"], LinearLayer[16], 
ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

Why am I getting this error?

Thanks, GW

5 Replies

It looks like you don't have any trainable parameters in your input encoding, and that part is at the beginning of the net. You can simply move it into a custom "Function" encoder.

This is a version doing that (and also showcasing some shortcuts for specifying layers in a NetChain):

net = NetInitialize @ NetChain[
	{256, Ramp, 16, Ramp, 16, {}},
	"Input" -> NetEncoder[{"Function", IntegerDigits[#, 2, 8]&, {8}}],
	"Output" -> "Real"
]
net[1]
(* -0.0137619 *)

net[{1, 2, 3}]
(* {-0.0137619, -0.0934901, -0.0568687} *)

The final empty list is a length-zero array that represents a scalar output, which I assume is what you were looking for with the final 1.
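The difference between ending the chain with {} and with 1 shows up in the output dimensions; a minimal sketch (netScalar and netVector are made-up names):

```mathematica
(* {} is shorthand for LinearLayer[{}] -> scalar output;
   1 is shorthand for LinearLayer[1]  -> length-1 vector *)
netScalar = NetInitialize@NetChain[{2, {}}, "Input" -> {3}];
netVector = NetInitialize@NetChain[{2, 1}, "Input" -> {3}];
Dimensions[netScalar[{1., 2., 3.}]]  (* expect {}  -- a scalar *)
Dimensions[netVector[{1., 2., 3.}]]  (* expect {1} -- a length-1 vector *)
```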

Thank you! I modified your encoder so that the 24 8-bit integers expand to 192 integers, as follows:

NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}]

and the following commands indeed produce the same output:

inp = Table[RandomInteger[{0, 255}], 24];
enc = NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}];
Flatten[IntegerDigits[inp, 2, 8]]
enc[inp]
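For completeness, the full net with this 192-digit encoder would look like the following sketch (assuming the same architecture as in the earlier reply):

```mathematica
(* Same NetChain as above, but the encoder now expands
   24 8-bit integers into 192 binary digits *)
net = NetInitialize@NetChain[
    {256, Ramp, 16, Ramp, 16, {}},
    "Input" -> NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}],
    "Output" -> "Real"
];
net[Table[RandomInteger[{0, 255}], 24]]  (* a single real number *)
```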

Regards, GW

Posted 4 months ago

I’m new to Mathematica so I’m guessing a little here, but for deep neural networks to work, all parts of the network need to be differentiable. Maybe FunctionLayer cannot support arbitrary code and therefore cannot accept map.

POSTED BY: t g

I have been trying to modify the code so that it does not raise an error. The following code does NOT raise the error:

net = NetChain[{FunctionLayer[Mean], LinearLayer[256], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

but the following code DOES:

map[x_] := Mean[x]
net = NetChain[{FunctionLayer[map], LinearLayer[256], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

Why?

Regards, GW

The issue is that the compilation step in FunctionLayer cannot read the downvalues of the map symbol in order to inline it in the net.

This is an unsupported feature at the moment.
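As a workaround, the thread's own examples suggest passing the function inline, so there are no downvalues to resolve; a sketch contrasting the two forms:

```mathematica
(* Works: a built-in symbol (or an inline pure function), nothing to inline *)
ok = NetChain[{FunctionLayer[Mean], LinearLayer[16]}];

(* Fails with FunctionLayer::compilerr: map is defined via downvalues,
   which the FunctionLayer compiler cannot see *)
map[x_] := Mean[x];
bad = NetChain[{FunctionLayer[map], LinearLayer[16]}];
```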
