Message Boards


Add a customized complex loss function to a neural network?

Posted 17 days ago
3 Replies
0 Total Likes


My general question is: how can I add a customized, more complex loss function to a neural network?

I need to address this issue soon, but I have not found any good examples of how to do it. The loss function documentation contains a short, simple example, but I did not succeed in extending it to a more complex loss function or in implementing code that does what I want. I think I'm missing something; if anyone can point me to a well-organized tutorial on this topic, please do.

I have one example, but I do not want to limit the answers only to this example.

Suppose I have a 1D signal as input (x), and I train a net to produce a similar 1D signal as output (y). The network uses a default loss function (e.g. CrossEntropyLossLayer). I want to add an expression to the loss function (while still using the CrossEntropyLossLayer in addition to it), so that it minimizes the difference between successive samples. For example, something like:

Σₙ ‖x(n+1) − x(n)‖

This loss does not deal with y, only with x.

How can I do this? What is the right way to customize the loss function?

I hope someone can help me.

Thank you very much!

3 Replies

You can add as many loss layers as you want, and by specifying the LossFunction option in NetTrain you can indicate which outputs of your network should be used as losses. To build a loss function you can just use the default net building blocks, as you can see in the example below.

In this case NetTrain[..., LossFunction -> {"Loss1", "Loss2", "Loss3"}]. As far as I understand, anything will work as long as the output of your loss net is a real (scalar) value.

[image: custom loss layers]
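Since the original image is not reproduced here, the following is a minimal sketch of the idea (all layer names and sizes are illustrative, not taken from the original example): a NetGraph that combines a standard loss with the questioner's smoothness penalty Σₙ ‖x(n+1) − x(n)‖, built entirely from standard building blocks. MeanSquaredLossLayer is used for the real-valued signal; CrossEntropyLossLayer could be substituted for classification-style targets.

```mathematica
(* Sketch only: "unet" is a stand-in LinearLayer, and 512 is an assumed
   signal length. The smoothness penalty is built from standard layers. *)
net = NetGraph[
  <|
    "unet" -> LinearLayer[512],           (* placeholder for the real network *)
    "recLoss" -> MeanSquaredLossLayer[],  (* standard loss; has a "Target" port *)
    "shiftLeft" -> PartLayer[2 ;;],       (* x[2], ..., x[n]   *)
    "dropLast" -> PartLayer[;; -2],       (* x[1], ..., x[n-1] *)
    "sub" -> ThreadingLayer[Subtract],    (* x[k+1] - x[k] *)
    "abs" -> ElementwiseLayer[Abs],
    "smoothLoss" -> SummationLayer[]      (* scalar output, so usable as a loss *)
  |>,
  {
    NetPort["Input"] -> "unet",
    "unet" -> "recLoss",
    "unet" -> "shiftLeft", "unet" -> "dropLast",
    {"shiftLeft", "dropLast"} -> "sub" -> "abs" -> "smoothLoss"
  },
  "Input" -> 512
]
```

Training would then name both outputs as losses; note that "smoothLoss" already produces a scalar, so only "recLoss" needs a target:

```mathematica
NetTrain[net, <|"Input" -> data, "Target" -> data|>,
  LossFunction -> {"recLoss", "smoothLoss"}]
```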

Posted 12 days ago

Thank you very much, Martijn. Can I define different "Target" and "Input" ports for each loss layer? Where do I do that?
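One way this can be done (a sketch with hypothetical layer and port names, not an answer from the thread) is to wire each loss layer's "Target" port to its own named NetPort inside the NetGraph, so that each loss gets its own entry in the training data:

```mathematica
(* Sketch: two loss layers, each with its own exposed target port. *)
net2 = NetGraph[
  <|"net" -> LinearLayer[10],
    "loss1" -> MeanSquaredLossLayer[],
    "loss2" -> MeanAbsoluteLossLayer[]|>,
  {"net" -> NetPort[{"loss1", "Input"}],
   "net" -> NetPort[{"loss2", "Input"}],
   NetPort["Target1"] -> NetPort[{"loss1", "Target"}],
   NetPort["Target2"] -> NetPort[{"loss2", "Target"}]},
  "Input" -> 10]

(* Each target is then supplied under its own key: *)
(* NetTrain[net2, <|"Input" -> in, "Target1" -> t1, "Target2" -> t2|>,
     LossFunction -> {"loss1", "loss2"}] *)
```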

Posted 10 days ago

Let's try a more concrete example:

I created the following net:

[image: the full NetGraph described below]

It takes a 1×1×512 signal as "Input" and passes it through "unet" to produce "Output", the desired signal (which resembles the input signal); the standard loss is calculated on this output.

In addition, "Output" feeds into the "smooth" layer to calculate the "smoothLoss".

However, the "Target" ports for the two losses should be different: for the "Output" loss the target is the input itself, while for the "smoothLoss" the target should be zero. I tried to implement it as:

testTrain = NetTrain[fullNet,
  <|"Input" -> inputDataNorm, "Output" -> inputDataNorm,
    "smoothLoss" -> ConstantArray[0, Length[inputDataNorm]]|>,
  LossFunction -> {"Output", "smoothLoss"}]

But I got messy results, and I'm sure something is wrong with that code.
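One likely simplification, following the earlier advice in this thread that any scalar ("Real") output can serve directly as a loss: if the "smooth" part of fullNet ends in a layer that already reduces to a scalar (for example SummationLayer[] over the absolute differences), then the "smoothLoss" output needs no target at all, and the all-zero array can be dropped from the training data. A sketch, assuming the layer and port names from the post:

```mathematica
(* Sketch: "smoothLoss" is assumed to already be a scalar output of
   fullNet, so only the reconstruction output needs a target. *)
testTrain = NetTrain[fullNet,
  <|"Input" -> inputDataNorm, "Output" -> inputDataNorm|>,
  LossFunction -> {"Output", "smoothLoss"}]
```

If the smoothness term then dominates (or is drowned out by) the reconstruction loss, its weight can be adjusted with the Scaled spec, e.g. LossFunction -> {"Output", "smoothLoss" -> Scaled[0.1]}.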

Can someone help me please ?

