
Cellular Automaton Neural Network Classification

POSTED BY: Thales Fernandes
5 Replies

Thales, do you think your approach might give some results for the partitions example with a low modulus, as in Successive Differences of Sequences:

http://demonstrations.wolfram.com/SuccessiveDifferencesOfSequences

Those certainly look like CAs. - George

POSTED BY: George Beck

One of the main reasons why this approach works well with CA (the collapsing of all spatial information) is that there is a deep underlying structure in a CA: each cell depends locally on the cells above it. So, in a sense, the convolution is being forced to learn this underlying structure. My idea in using a pooling layer to collapse the spatial information is to get an operation similar to DeleteDuplicates.
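
Roughly, the idea is something like this sketch (the layer sizes, input dimensions, and the 256 elementary-rule classes here are only illustrative):

    (* Convolutions learn the local rule; AggregationLayer then collapses
       ALL spatial information, playing a role similar to DeleteDuplicates.
       Layer sizes, input size, and the rule classes are assumptions. *)
    net = NetChain[{
        ConvolutionLayer[20, {3, 3}], Ramp,
        ConvolutionLayer[20, {3, 3}], Ramp,
        AggregationLayer[Max], (* collapse all spatial dimensions *)
        LinearLayer[256],
        SoftmaxLayer[]},
      "Input" -> {1, 64, 64},
      "Output" -> NetDecoder[{"Class", Range[0, 255]}]]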

The patterns shown in the link provided seem to fall into this local category, at least for low moduli, as you suggest. Perhaps a bigger convolution, or cascades thereof, followed by all-spatial pooling could learn the underlying structure of the rule used to generate them. It might be worth investigating this approach in other systems.
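
For instance, such a pattern could be generated along these lines (the sequence, modulus, and sizes are placeholders, not taken from the Demonstration's source):

    (* Repeated Differences of a sequence, reduced mod m; rows get shorter
       each step, so they are padded for plotting. Sequence, modulus, and
       sizes are placeholders. *)
    m = 2;                                (* a low modulus *)
    seq = Prime[Range[64]];               (* e.g. the primes *)
    rows = Mod[NestList[Differences, seq, 40], m];
    ArrayPlot[PadRight[#, 64] & /@ rows]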

POSTED BY: Thales Fernandes

Congratulations! This post is now a Staff Pick as distinguished by a badge on your profile! Thank you, keep it coming!

POSTED BY: EDITORIAL BOARD

Thales, what a neat project, thanks for sharing! Some observations:

  • There is a little typo in the function definition, I think: net[W_Integer, H_W_Integer] should simply have H_Integer, perhaps.

  • Did you try the option ValidationSet -> Scaled[0.1] for NetTrain? I wonder if that improves anything.

I don't have a GPU, so it would take too long to check. BTW, what kind of GPU did you use, and how long did it take to train?

POSTED BY: Vitaliy Kaurov

Thanks, Vitaliy, for the typo correction.

Did you try the option ValidationSet -> Scaled[0.1] for NetTrain? I wonder if that improves anything.

Since the data is generated fresh each time the code is run, there is no need for a validation set, but one could be set just to verify convergence.
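
If one did want to monitor convergence, the option could simply be passed to NetTrain; in this sketch, net and data stand for the network and the generated training data:

    (* Sketch only: "net" and "data" are stand-ins for the post's
       network and generated training data. *)
    trained = NetTrain[net, data,
      ValidationSet -> Scaled[0.1], (* hold out 10% to track convergence *)
      TargetDevice -> "GPU"]        (* use "CPU" if no GPU is available *)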

I don't have a GPU, so it would take too long to check. BTW, what kind of GPU did you use, and how long did it take to train?

I used a GTX 1050, and it took around 5 minutes to train. The fast training time is due to the compact size of the network, which could be compacted further for efficiency.

POSTED BY: Thales Fernandes
