Transformers
Course module.
Module Contents
1. Self-Attention
2. Transformer Architecture
3. Pre-Training
Module Chapters
Chapter 01
Self-Attention
Master the core mechanism of Transformers through an interactive visualization of Query, Key, and Value attention.
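The chapter's visualization is interactive, but the mechanism itself fits in a few lines. Below is a minimal NumPy sketch of scaled dot-product self-attention; the function and variable names (`self_attention`, `W_q`, `W_k`, `W_v`) are illustrative, not taken from the course materials.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over X of shape (seq_len, d_model)."""
    Q = X @ W_q                            # queries
    K = X @ W_k                            # keys
    V = X @ W_v                            # values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)     # each row is a distribution over positions
    return weights @ V                     # weighted sum of value vectors

# Toy example: 4 tokens, model dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, with mixing weights set by query-key similarity — exactly what the chapter's visualization animates.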
Chapter 02
Transformer Architecture
Deep dive into the Transformer architecture: Multi-Head Attention, Positional Encodings, and Encoder-Decoder blocks.
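Of the components this chapter covers, positional encodings are the easiest to sketch in isolation. The following NumPy snippet implements the standard sinusoidal encoding (sine on even dimensions, cosine on odd ones); the function name and shapes are illustrative assumptions.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal encodings: PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    #                       PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(seq_len)[:, None]       # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]   # (1, d_model/2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)            # even dimensions
    pe[:, 1::2] = np.cos(angles)            # odd dimensions
    return pe

pe = positional_encoding(50, 16)
print(pe.shape)  # (50, 16)
```

These vectors are added to the token embeddings so that attention, which is otherwise order-invariant, can distinguish positions.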
Chapter 03
Pre-Training
Learn how Transformers are pre-trained on massive datasets, comparing the BERT, GPT, and T5 objectives.
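The three objectives differ mainly in what prediction targets they build from raw text. The toy sketch below illustrates that difference on a six-token sentence; the mask positions and sentinel token are chosen by hand for illustration (BERT actually samples ~15% of positions at random, and T5 samples span lengths).

```python
# Toy token sequence for illustrating the three objectives.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Causal LM (GPT-style): predict each token from the tokens before it.
causal_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# Masked LM (BERT-style): hide a subset of positions (fixed here for
# illustration; BERT samples ~15% at random) and predict the originals.
mask_positions = {2, 5}
masked = [("[MASK]" if i in mask_positions else t) for i, t in enumerate(tokens)]
targets = {i: tokens[i] for i in mask_positions}

# Span corruption (T5-style): replace a contiguous span with a sentinel
# token and train the decoder to reproduce the span.
corrupted = tokens[:2] + ["<extra_id_0>"] + tokens[4:]
target_seq = ["<extra_id_0>"] + tokens[2:4]

print(causal_pairs[0])  # (['the'], 'cat')
print(masked)           # ['the', 'cat', '[MASK]', 'on', 'the', '[MASK]']
print(corrupted)        # ['the', 'cat', '<extra_id_0>', 'the', 'mat']
```

GPT sees only left context, BERT sees both sides of each mask, and T5 frames corruption as sequence-to-sequence generation — the trade-offs the chapter compares.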
Chapter 04
Module Review: Transformers
Review the key concepts from this module: self-attention, the full Transformer architecture, and pre-training objectives.