Message Boards

Game of Life (Manual) Neural Network

POSTED BY: Thales Fernandes
3 Replies

Interesting idea and a cool project for understanding the inner workings of NNs! But could you explain the reversibility idea a bit better, please? Isn't it true that random data generation can produce an ill-formed training set containing

{a->b, a->c, ...}

where "a" is the input next state and "b" and "c" are two different predicted (and both valid) previous states? A net cannot be properly trained on such a set, can it?
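The ambiguity is easy to exhibit on a tiny board (a sketch in Python rather than Wolfram Language, assuming a bounded, non-wrapping 3×3 grid): the all-dead board has at least two distinct predecessors — the all-dead board itself, and a board with a single live cell, which dies of underpopulation. Both step to the same next state, so the inverse mapping is one-to-many:

```python
def life_step(grid):
    """One Game of Life step on a bounded (non-wrapping) grid of 0/1 cells."""
    h, w = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < h and 0 <= c + dc < w)

    # A cell is alive next step iff it has exactly 3 live neighbors,
    # or it is alive now and has exactly 2 live neighbors.
    return [[1 if live_neighbors(r, c) == 3
             or (grid[r][c] and live_neighbors(r, c) == 2) else 0
             for c in range(w)] for r in range(h)]

empty = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
lone  = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # a single live cell dies

# Two different previous states, one identical next state:
print(life_step(empty) == life_step(lone))  # prints True
```

So a randomly generated set of (next state → previous state) pairs really can contain the same input with conflicting targets.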

POSTED BY: Sam Carrettie

Indeed Sam, your point is valid.

My approach was a probabilistic one: predict the probability that a given previous cell was 1. For this I need to generate all possible predecessors and take the per-cell mean, which gives that probability. This way we can get a sense of the possible predecessors of a given configuration.

But I wasn't able to train it properly, at least not as quickly as the previous approach. And I believe there must be a simple solution that can be trained quickly.
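That per-cell mean can be computed exactly by brute force on toy boards (a Python sketch, not the author's actual training setup; `predecessor_probabilities` is a hypothetical helper, and enumerating all 2^(h·w) candidates is only feasible for tiny grids): enumerate every candidate previous board, keep the ones that step to the target, and average them cell by cell.

```python
from itertools import product

def life_step(grid):
    """One Game of Life step on a bounded (non-wrapping) grid of 0/1 cells."""
    h, w = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < h and 0 <= c + dc < w)

    return [[1 if live_neighbors(r, c) == 3
             or (grid[r][c] and live_neighbors(r, c) == 2) else 0
             for c in range(w)] for r in range(h)]

def predecessor_probabilities(target):
    """Mean over all exact predecessors of `target`: entry (r, c) is the
    probability that cell (r, c) was alive in the previous step."""
    h, w = len(target), len(target[0])
    candidates = ([list(bits[r * w:(r + 1) * w]) for r in range(h)]
                  for bits in product((0, 1), repeat=h * w))
    preds = [g for g in candidates if life_step(g) == target]
    n = len(preds)
    return [[sum(g[r][c] for g in preds) / n for c in range(w)]
            for r in range(h)], n

probs, n = predecessor_probabilities([[0, 0, 0], [0, 0, 0], [0, 0, 0]])
print(n)  # number of distinct 3x3 predecessors of the all-dead board
```

This exact per-cell mean is what a probabilistic net would be regressing; the brute force only serves to define the target on small boards.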

POSTED BY: Thales Fernandes

Congratulations! This post is now a Staff Pick, as distinguished by a badge on your profile! Thank you, keep it coming, and consider contributing your work to The Notebook Archive!

POSTED BY: Moderation Team