Introduction to Transformers in Computer Vision

Explore the fundamentals of applying transformers in computer vision.

Having built a foundation in attention and transformers through their original home in NLP, we’re now ready to apply transformers to computer vision (CV), the central theme of this course.

Bridging the gap: Self-attention in computer vision

Let’s start with a simple roadmap of the path ahead.

  • Self-attention in CV: We’ll explore how self-attention, the core building block of transformer networks, can be applied to CV (see the code sketch after this list).

  • Generalization of self-attention equations: We’ll also investigate how the attention equations can be adapted and generalized to domains beyond NLP and CV, starting from the standard formulation recapped right after this list.
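
As a reference point, the standard scaled dot-product attention we met in the NLP setting, and the starting point for that generalization, is:

$$
\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V
$$

Here $Q$, $K$, and $V$ are the query, key, and value matrices, and $d_k$ is the key dimension used to scale the dot products.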

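To make the first roadmap item concrete, below is a minimal NumPy sketch of self-attention over image patches: the image is split into patches, each flattened patch is treated as a token, and the tokens attend to one another exactly as words do in NLP. The 32×32 toy image, the 8×8 patch size, and the random projection matrices are illustrative assumptions, not the implementation we’ll build in this course.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    # Scaled dot-product self-attention over a sequence of tokens X.
    # X: (n_tokens, d_model); W_q, W_k, W_v: (d_model, d_k) projections.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_tokens, n_tokens) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # (n_tokens, d_k) attended outputs

rng = np.random.default_rng(0)

# Toy "image" (illustrative): 32x32 grayscale, cut into 8x8 patches,
# giving 16 patch tokens of 64 pixels each.
image = rng.standard_normal((32, 32))
patch = 8
patches = image.reshape(32 // patch, patch, 32 // patch, patch)
patches = patches.transpose(0, 2, 1, 3).reshape(-1, patch * patch)  # (16, 64)

# Randomly initialized projections (assumed for illustration only).
d_model, d_k = patches.shape[1], 32
W_q, W_k, W_v = (rng.standard_normal((d_model, d_k)) * 0.1 for _ in range(3))

out = self_attention(patches, W_q, W_k, W_v)
print(out.shape)  # (16, 32): one attended vector per patch
```

Treating each flattened patch as a token is precisely the move that lets the attention machinery from NLP transfer to images unchanged.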