Clarification about tutorial 6

Newbie here, so maybe this is a silly question. Going through the tutorials, I had difficulty understanding this section in tutorial 6 (link).

My question is: why is inputs[1:] appended in the calls to gp1, gc2, gp2, and readout? The tutorial doesn't explain it, so maybe it's evident, but I'm not clear on this.


import tensorflow as tf
from tensorflow.keras import layers
from deepchem.models.layers import GraphConv, GraphPool, GraphGather

class MyGraphConvModel(tf.keras.Model):

  def __init__(self):
    super(MyGraphConvModel, self).__init__()
    # First graph convolution block: convolve, normalize, pool
    self.gc1 = GraphConv(128, activation_fn=tf.nn.tanh)
    self.batch_norm1 = layers.BatchNormalization()
    self.gp1 = GraphPool()

    # Second graph convolution block
    self.gc2 = GraphConv(128, activation_fn=tf.nn.tanh)
    self.batch_norm2 = layers.BatchNormalization()
    self.gp2 = GraphPool()

    # Per-atom dense layer, then a gather that combines each molecule's
    # atom vectors into a single molecule-level vector
    # (batch_size and n_tasks are defined earlier in the tutorial)
    self.dense1 = layers.Dense(256, activation=tf.nn.tanh)
    self.batch_norm3 = layers.BatchNormalization()
    self.readout = GraphGather(batch_size=batch_size, activation_fn=tf.nn.tanh)

    # Two-class output head for each task
    self.dense2 = layers.Dense(n_tasks * 2)
    self.logits = layers.Reshape((n_tasks, 2))
    self.softmax = layers.Softmax()

  def call(self, inputs):
    # inputs[0] is the per-atom feature matrix; inputs[1:] hold the
    # graph topology and are threaded through unchanged
    gc1_output = self.gc1(inputs)
    batch_norm1_output = self.batch_norm1(gc1_output)
    gp1_output = self.gp1([batch_norm1_output] + inputs[1:])

    gc2_output = self.gc2([gp1_output] + inputs[1:])
    batch_norm2_output = self.batch_norm2(gc2_output)
    gp2_output = self.gp2([batch_norm2_output] + inputs[1:])

    dense1_output = self.dense1(gp2_output)
    batch_norm3_output = self.batch_norm3(dense1_output)
    readout_output = self.readout([batch_norm3_output] + inputs[1:])

    logits_output = self.logits(self.dense2(readout_output))
    return self.softmax(logits_output)

Thanks

IIRC, all the graph conv layers take several inputs (things like adjacency matrices, bond types, and so on) that need to be threaded through the calls. These are aspects of the molecular graph that are shared from layer to layer, so they are just copied forward. I am answering offhand, though, so I may not be 100% correct.
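For context, this is roughly how the tutorial's data generator assembles that inputs list (I'm reconstructing it from memory of the notebook, so double-check the exact code there; batch_size, n_tasks, and to_one_hot are defined earlier in the tutorial). Only inputs[0] is the per-atom feature matrix; everything from inputs[1] onward describes the graph topology, which is why it gets passed along to each layer unchanged:

import numpy as np
from deepchem.feat.mol_graphs import ConvMol
from deepchem.metrics import to_one_hot

def data_generator(dataset, epochs=1):
  for X_b, y_b, w_b, ids_b in dataset.iterbatches(batch_size, epochs,
                                                  deterministic=False,
                                                  pad_batches=True):
    # Merge the batch of molecules into one big disconnected graph
    multiConvMol = ConvMol.agglomerate_mols(X_b)
    # inputs[0]: per-atom feature matrix
    # inputs[1]: deg_slice, inputs[2]: membership (which molecule
    # each atom belongs to)
    inputs = [multiConvMol.get_atom_features(),
              multiConvMol.deg_slice,
              np.array(multiConvMol.membership)]
    # inputs[3:]: adjacency lists, grouped by atom degree
    for i in range(1, len(multiConvMol.get_deg_adjacency_lists())):
      inputs.append(multiConvMol.get_deg_adjacency_lists()[i])
    labels = [to_one_hot(y_b.flatten(), 2).reshape(-1, n_tasks, 2)]
    weights = [w_b]
    yield (inputs, labels, weights)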

How do you use adjacency matrices as input? I have tried using a list of tensors with feature and adjacency matrices as input for the graph conv layer, but I am encountering errors related to dtype and such.
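One guess at the dtype errors (I'm not certain this is your issue): these layers expect the ConvMol-style inputs shown above rather than a raw dense adjacency matrix, and TensorFlow is strict about dtypes, so the atom features should be float32 and the topology arrays integer-typed. A minimal sketch of a cast helper (a hypothetical name, just to illustrate the idea):

import numpy as np

def cast_inputs(inputs):
  # Hypothetical helper: coerce ConvMol-derived inputs to the dtypes
  # the TF layers expect (float32 features, int32 index arrays)
  atom_features = np.asarray(inputs[0], dtype=np.float32)
  topology = [np.asarray(t, dtype=np.int32) for t in inputs[1:]]
  return [atom_features] + topology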