Graph Convolution

Learn how to use graph convolution in neural networks for 3D meshes.

Overview

While we have many options for representing 3D data, it is often ideal to work with meshes directly. Much of the 3D art world is built around meshes, so AI techniques that can manipulate them directly avoid costly conversions to and from other representations. By treating 3D meshes as graph data, we can apply concepts from graph neural networks to 3D data.

Graph convolution

Graph convolution is a concept drawn from graph theory. For a graph consisting of nodes N and edges E, it defines a convolution operator f(N, E) that combines the position and optional features of a vertex i with those of its set of neighbors N(i). It is essentially a generalization of convolutional layers to connections that can be irregular and/or asymmetric. We can think of the convolutional layers used in computer vision, for example, as a special case of graph convolution in which each node is densely and bidirectionally connected to its adjacent nodes.
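
To make the idea concrete, here is a minimal sketch of a graph convolution layer in PyTorch. The class name SimpleGraphConv, the two weight matrices, and the tetrahedron example are illustrative assumptions, not part of any particular library's API; the layer simply combines each vertex's own features with the mean of its neighbors' features, which is one common instance of the operator f(N, E) described above.

```python
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """A simple graph convolution: f(N, E) = W_self * x_i + W_nbr * mean(x_j, j in N(i))."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # One linear map for the vertex itself, one for its aggregated neighbors.
        self.w_self = nn.Linear(in_features, out_features)
        self.w_neighbor = nn.Linear(in_features, out_features)

    def forward(self, verts, edges):
        # verts: (V, F) per-vertex features (e.g., 3D positions).
        # edges: (E, 2) pairs of vertex indices; treated as undirected here.
        num_verts = verts.shape[0]
        src = torch.cat([edges[:, 0], edges[:, 1]])
        dst = torch.cat([edges[:, 1], edges[:, 0]])

        # Sum neighbor features for each vertex i over its neighborhood N(i).
        neighbor_sum = torch.zeros_like(verts)
        neighbor_sum.index_add_(0, dst, verts[src])

        # Normalize by the number of neighbors (guarding against isolated vertices).
        degree = torch.zeros(num_verts, device=verts.device)
        degree.index_add_(0, dst, torch.ones_like(dst, dtype=verts.dtype))
        neighbor_mean = neighbor_sum / degree.clamp(min=1).unsqueeze(1)

        # Combine the vertex's own features with its neighborhood's.
        return self.w_self(verts) + self.w_neighbor(neighbor_mean)


# Example: a single tetrahedron mesh with 3D positions as input features.
verts = torch.tensor([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
edges = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
conv = SimpleGraphConv(in_features=3, out_features=16)
out = conv(verts, edges)  # (4, 16) per-vertex output features
```

Because the aggregation is driven by the edge list rather than a fixed grid, the same layer works for meshes with any connectivity, which is exactly what distinguishes graph convolution from the image convolutions of standard computer vision networks.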
