Discontinuous Deep Learning?

Posted 8 years ago
POSTED BY: Bryan Minor
4 Replies

So after talking with the developers about this, it turns out there's a better explanation.

The learning rate also changes: it starts relatively high at the beginning of NetTrain and decays as training proceeds. When you restart NetTrain, it likely begins again with a learning rate that is too high, which disrupts the previously learned weights.
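The effect of restarting a decaying learning-rate schedule can be illustrated with a toy example. The sketch below (Python, gradient descent on a simple quadratic; the 1/t schedule and all constants are made up for illustration and are not NetTrain's actual internals) shows that resuming with the decayed rate keeps improving the loss, while restarting the schedule from its initial high rate makes the loss jump back up:

```python
def train(w, steps, start_step=0, lr0=1.2, decay=0.5):
    """Gradient descent on f(w) = w^2 with a decaying learning rate.

    lr_t = lr0 / (1 + decay * t) -- a generic 1/t schedule, chosen only
    for illustration; NetTrain's real schedule is an internal detail.
    """
    losses = []
    for t in range(start_step, start_step + steps):
        lr = lr0 / (1 + decay * t)
        w -= lr * 2 * w          # gradient of w^2 is 2w
        losses.append(w * w)
    return w, losses

w0, first = train(2.0, steps=5)                  # initial training session
_, resumed = train(w0, steps=5, start_step=5)    # continue the decayed schedule
_, restarted = train(w0, steps=5, start_step=0)  # restart the schedule from lr0

print(first[-1], resumed[0], restarted[0])
```

With these constants, the resumed run's first loss falls below the last loss of the initial session, while the restarted run's first loss jumps above it: the same kind of discontinuity described above.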

POSTED BY: Sean Clarke

Thanks Sean! I think you are correct. I appreciate your insight on this issue.

Now off to do more Deep Learning NN models.

POSTED BY: Bryan Minor

> From one session to the next the only thing that changes is the previously trained model

There is one other thing I see that changes: the validation set. Each time you run NetTrain, 20% of your data is randomly selected to be the validation set, so every training session gets a new validation set and a somewhat different pool of training data.
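If the held-out 20% really is re-drawn on every call, each session scores against a different validation set. A minimal sketch of such a random hold-out split (Python; the 20% fraction comes from the post above, everything else is illustrative):

```python
import random

def split_validation(data, frac=0.2, seed=None):
    """Randomly hold out `frac` of the examples as a validation set."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    k = int(len(shuffled) * frac)
    return shuffled[k:], shuffled[:k]  # (training set, validation set)

data = range(100)
train_a, val_a = split_validation(data, seed=1)  # "session" one
train_b, val_b = split_validation(data, seed=2)  # "session" two
# The two sessions hold out different examples, so their validation
# losses are not directly comparable from run to run.
```

Because each session's validation loss is measured on a different random sample, some run-to-run jitter in the reported curve is expected even if the net itself is improving steadily.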

Under these conditions, I would expect somewhat of a zigzag pattern. I'm not sure, however, whether that fully explains what we see in your example.

Or worse, maybe each round is somewhat *overfitting* to the validation set? I'm not sure that's the right word for it.

POSTED BY: Sean Clarke