I am trying to translate an example of a graph convolutional network from Deep Learning for Physical Scientists: Accelerating Research with Machine Learning by Edward O. Pyzer-Knapp et al. They have an example (Chapter 6, Section 6.3, pages 104-106) where they build a graph convolutional network using Keras.
The model takes a 132×132 adjacency matrix built from the molecule's connectivity, with a label of 1 or 0 marking the molecule as active or inactive.
from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split
model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(None, None, 1), padding='SAME'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (3, 3), activation='relu'))
model.add(layers.GlobalMaxPooling2D())
model.add(layers.Dense(2, activation='softmax'))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
I would like to build this model in WL. My attempt thus far is:
net = NetChain[{
   ConvolutionLayer[64, {3, 3}],
   ElementwiseLayer["ReLU"],
   PoolingLayer[{2, 2}, "Function" -> Max],
   ConvolutionLayer[32, {2, 2}],
   PoolingLayer[{2, 2}, "Function" -> Max],
   LinearLayer[2],
   SoftmaxLayer[]}]
I can't find an equivalent of GlobalMaxPooling2D, and I am uncertain whether my convolution layers are set up correctly. I would appreciate it if anyone could show how this can be done.
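The closest thing I have found so far is AggregationLayer, so my current best guess is the sketch below. It assumes that AggregationLayer[Max] plays the role of GlobalMaxPooling2D, that "PaddingSize" -> 1 reproduces the 'SAME' padding of the first 3×3 convolution at stride 1, and that the input is a single-channel 132×132 array in channels-first layout; I am not certain it is faithful to the Keras model.

net = NetChain[{
   ConvolutionLayer[64, {3, 3}, "PaddingSize" -> 1], (* 3x3 conv, padding 1 ~ Keras 'SAME' at stride 1 *)
   ElementwiseLayer[Ramp],                           (* ReLU *)
   PoolingLayer[{2, 2}, 2, "Function" -> Max],       (* 2x2 max pooling with stride 2, like MaxPooling2D((2, 2)) *)
   ConvolutionLayer[32, {3, 3}],                     (* 3x3 conv, no padding, as in the second Conv2D *)
   ElementwiseLayer[Ramp],
   AggregationLayer[Max],                            (* max over both spatial dimensions ~ GlobalMaxPooling2D *)
   LinearLayer[2],
   SoftmaxLayer[]},
  "Input" -> {1, 132, 132}]                          (* single-channel 132x132 adjacency matrix *)

If that is right, I assume NetTrain[net, inputs -> labels] would take the place of model.compile/model.fit, with the 0/1 activity labels shifted to 1 and 2 since WL class indices start at 1, but I have not tested this.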
Thank you,