Basics of Graphs
Understand the basic concepts of graphs, how they can be represented, and their formulation.
In this lesson, we will explore graph neural networks and graph convolutions. Graphs are a super general representation of data with intrinsic structure.
The most intuitive transition to graphs is by starting from images.
Why?
Because images are highly structured data. Their components (pixels) are arranged in a meaningful way. If you change the way the pixels are arranged, the image loses its meaning. Additionally, images have a very strong notion of locality.
As you can see, the pixels are arranged in a grid, which is the structure of the image. Since structure matters, it makes sense to design filters that aggregate representations from a neighborhood of pixels. These filters are convolutions.
The pixels also have one (grayscale) or more intensity channels. In a general form, each pixel has a vector of features that describes it. Thus, the channel intensities could be regarded as the signal of the image.
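The separation described above can be made concrete in code. The following sketch (illustrative, using NumPy) represents a tiny grayscale image as two distinct objects: a feature matrix (the signal) and a grid adjacency matrix (the structure) connecting each pixel to its 4-neighborhood:

```python
import numpy as np

# A tiny 3x3 grayscale "image": each pixel holds a 1-dim feature (intensity).
h, w = 3, 3
image = np.arange(h * w, dtype=float).reshape(h, w)

# Signal: flatten the pixels into a (num_nodes, num_features) matrix.
x = image.reshape(-1, 1)  # shape (9, 1)

# Structure: a grid adjacency matrix connecting each pixel to its
# 4-neighborhood (up, down, left, right).
n = h * w
adj = np.zeros((n, n))
for i in range(h):
    for j in range(w):
        u = i * w + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                adj[u, ni * w + nj] = 1.0

print(x.shape)         # (9, 1) -> the signal
print(int(adj.sum()))  # 24 -> 12 undirected grid edges, the structure
```

For an image, `adj` is always the same fixed grid, which is why we never write it down explicitly; for a general graph, it can be arbitrary.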
The reason we don't usually think of images this way is that their structure and signal (features) are merged together into a single grid of values.
The key to understanding graphs is this decomposition of structure and signal (features), which is what makes them so powerful.
Decomposing features (signal) and structure
As we saw in the lesson on Transformers, natural language can also be decomposed into signal and structure. The structure is the order of the words, which carries syntactic and grammatical context. Here is an illustrative example:
The features will now be a set of word embeddings, and the order will be encoded in the positional embeddings.
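A minimal sketch of this decomposition, with toy random embeddings and an illustrative three-word vocabulary (all names here are assumptions, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = {"graphs": 0, "are": 1, "powerful": 2}  # toy vocabulary (illustrative)
d = 4                                           # embedding dimension

word_emb = rng.normal(size=(len(vocab), d))  # signal: one vector per word
pos_emb = rng.normal(size=(10, d))           # structure: one vector per position

sentence = ["graphs", "are", "powerful"]
ids = [vocab[w] for w in sentence]

# Each token's representation = its word embedding + its position embedding,
# so the word identities (signal) and the word order (structure) are encoded
# separately and only combined at the end.
tokens = word_emb[ids] + pos_emb[: len(ids)]
print(tokens.shape)  # (3, 4)
```

Shuffling the sentence changes only which rows of `pos_emb` get added, not `word_emb` itself, which is exactly the signal/structure split.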
Graphs are not any different: they are data with decomposed ...