Implementation of a GCN

Develop and train a Graph Convolutional Network (GCN) in PyTorch.

Implementing a 1-hop GCN layer in PyTorch

For this tutorial, we will train a simple 1-hop GCN layer on a small graph dataset. We will use open-source graph data (you can find the link in the appendix) from the University of Dortmund. In particular, we will work with the MUTAG dataset because it is small enough to train a model quickly and solidify your understanding.

MUTAG dataset characteristics

Each node has a label from 0 to 6, which will be used as a one-hot encoded feature vector. Of the 188 graphs, we will use 150 for training and the rest for validation. Finally, there are two graph classes.
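To make the feature construction concrete, here is a minimal sketch of turning integer node labels in the range 0 to 6 into one-hot feature vectors with PyTorch. The label values below are illustrative, not taken from the actual dataset files:

```python
import torch
import torch.nn.functional as F

# Hypothetical node labels for a single graph: one integer in [0, 6] per node.
node_labels = torch.tensor([0, 3, 6, 2])

# One-hot encode each label into a 7-dimensional feature vector.
x = F.one_hot(node_labels, num_classes=7).float()

print(x.shape)  # torch.Size([4, 7]) -- one 7-dimensional feature vector per node
```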

The goal is to demonstrate that graph neural networks are a great fit for such data. You can find the data-loading part as well as the training loop code in the notebook; here, they are omitted for clarity. Instead, you are shown the results in terms of accuracy.

But first things first. As an exercise, you will need to build a simple Graph Convolutional Layer that we will incorporate into our network.

Our GCN layer will be defined by the following equation:

$$Y = L_{norm} \, X \, W$$
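As a starting point, here is a minimal sketch of such a layer. It assumes dense (optionally batched) adjacency matrices and that $L_{norm}$ is the symmetrically normalized adjacency with self-loops, $D^{-1/2}(A+I)D^{-1/2}$, which is the usual choice for a 1-hop GCN; the helper create_graph_lapl_norm and the tensor shapes are illustrative assumptions, not the notebook's exact code:

```python
import torch
import torch.nn as nn


def create_graph_lapl_norm(a):
    """Assumed normalization: D^{-1/2} (A + I) D^{-1/2} on a dense (batched) adjacency matrix."""
    num_nodes = a.shape[-1]
    a = a + torch.eye(num_nodes, device=a.device)   # add self-loops
    deg_inv_sqrt = a.sum(dim=-1).pow(-0.5)          # D^{-1/2} as a vector of node degrees
    d_inv_sqrt = torch.diag_embed(deg_inv_sqrt)     # diagonal matrix (supports a batch dimension)
    return d_inv_sqrt @ a @ d_inv_sqrt


class GCNLayer(nn.Module):
    """A 1-hop GCN layer implementing Y = L_norm X W."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x, l_norm):
        # x: node features [batch, nodes, in_features]
        # l_norm: normalized adjacency [batch, nodes, nodes]
        # Project features (X W), then aggregate over 1-hop neighbours (L_norm X W).
        return l_norm @ self.linear(x)
```

One straightforward way to turn this into a graph classifier for MUTAG is to stack a few of these layers with a nonlinearity in between, apply a mean readout over the nodes, and finish with a linear layer that predicts the two classes.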