FunctionLayer::compilerr error: Cannot interpret as a network

Posted 9 months ago

Hi,

I have a dataset of 192 binary inputs (0/1) and corresponding outputs. I have compressed the inputs into 24 8-bit integers so that the entire dataset fits into RAM, which greatly speeds up training of the neural network. Obviously, before a batch is trained, the 24 8-bit integers have to be expanded back to 192 integers. I figured out how to do this in TensorFlow using the 'map' functionality of datasets, and now I want to do the same in Mathematica. I found FunctionLayer, but I am getting the error 'FunctionLayer::compilerr: Cannot interpret map[#1] & as a network.' on the following code:

uncomp[x_] := IntegerDigits[x, 2, 8]
map[x_] := Flatten[Map[uncomp, x]]
net = NetChain[{FunctionLayer[map], 
LinearLayer[256], ElementwiseLayer["ReLU"], LinearLayer[16], 
ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

Why am I getting this error?

Thanks, GW

7 Replies

It's a bit hidden, but you can use the same dimension spec that nets and layers (e.g. NetChain) use:

NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, "Varying"}]

Depending on your application, you can also avoid flattening in the encoder:

NetEncoder[{"Function", IntegerDigits[#, 2, 8] &, {"Varying", 8}}]

I have a follow-up question on the NetEncoder "Function" syntax. In the line

enc = NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}]

the parameter {192} seems to indicate that the output will be an integer vector of length 192. How do I specify that the encoder should output an integer vector of variable length? I can't find documentation on this. Thank you, Bill

POSTED BY: William Butler

Thank you! I modified your function so that the 24 8-bit integers expand to 192 integers, as follows:

NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}]

and the following commands indeed produce the same output:

inp = Table[RandomInteger[{0, 255}], 24];
enc = NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}];
Flatten[IntegerDigits[inp, 2, 8]]
enc[inp]

Regards, GW

It looks like you don't have any trainable parameters in your input encoding, and that part is at the beginning of the net. You can simply move it into a custom "Function" encoder.

This is a version doing that (and also showcasing some shortcuts for specifying layers in a NetChain):

net = NetInitialize @ NetChain[
	{256, Ramp, 16, Ramp, 16, {}},
	"Input" -> NetEncoder[{"Function", IntegerDigits[#, 2, 8]&, {8}}],
	"Output" -> "Real"
]
net[1]
(* -0.0137619 *)

net[{1, 2, 3}]
(* {-0.0137619, -0.0934901, -0.0568687} *)

The final empty list is a length-zero array that represents a scalar output, which I assume is what you were looking for with the final 1.
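
With the preprocessing in the encoder, the net can be trained directly on the raw 8-bit integers (a minimal sketch with made-up data, just to show the shape of the NetTrain call):

inputs = RandomInteger[{0, 255}, 1000];   (* raw 8-bit integers *)
targets = RandomReal[{0, 1}, 1000];       (* made-up scalar targets *)
trained = NetTrain[net, Thread[inputs -> targets], MaxTrainingRounds -> 5];
trained[7]   (* a single real prediction *)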

The issue is that the compilation step in FunctionLayer is not able to read the downvalues of the map symbol in order to inline them into the net.

This is an unsupported feature at the moment.
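
In other words, the body has to be given inline as a pure function rather than through a symbol's downvalues (a minimal illustration with a hypothetical helper f, using Mean because it is an operation FunctionLayer can compile):

FunctionLayer[Mean[#] &]   (* compiles: the body can be inlined *)

f[x_] := Mean[x];
FunctionLayer[f]   (* FunctionLayer::compilerr: the definition is hidden in f's downvalues *)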

Posted 9 months ago

I'm new to Mathematica so I'm guessing a little here, but for deep neural networks to work, all parts of the network need to be differentiable. Maybe the function layer cannot support arbitrary code and therefore cannot accept map.

POSTED BY: t g

I have been trying to modify the code so that it does not raise an error. The following code does NOT raise the error:

net = NetChain[{FunctionLayer[Mean], LinearLayer[256], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

but the following code DOES:

map[x_] := Mean[x]
net = NetChain[{FunctionLayer[map], LinearLayer[256], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 
   ElementwiseLayer["ReLU"], LinearLayer[16], 1}]

Why?

Regards, GW
