Revised for PyTorch 2.x!
Why this book?
Are you looking for a book where you can learn about deep learning and PyTorch without having to spend hours deciphering cryptic text and code? A technical book that's also easy and enjoyable to read?
This is it!
How is this book different?
- First, this book presents an easy-to-follow, structured, incremental, and from-first-principles approach to learning PyTorch.
- Second, this is a rather informal book: It is written as if you, the reader, were having a conversation with Daniel, the author. His job is to make you understand the topic well, so he avoids fancy mathematical notation as much as possible and spells everything out in plain English.
What will I learn?
In this third volume of the series, you'll be introduced to all things sequence-related: recurrent neural networks and their variations, sequence-to-sequence models, attention, self-attention, and Transformers.
This volume also includes a crash course on natural language processing (NLP), from the basics of word tokenization all the way up to fine-tuning large models (BERT and GPT-2) using the Hugging Face library.
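To give you a flavor of the Hugging Face workflow covered in that crash course, here is a minimal sketch (not taken from the book) of loading a pretrained BERT tokenizer and model for classification; the model name and number of labels are illustrative assumptions.

```python
# A minimal, illustrative sketch of the Hugging Face workflow:
# loading a pretrained BERT tokenizer and classification model.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # num_labels chosen for illustration
)

# Tokenize a sentence and run it through the model
inputs = tokenizer("PyTorch makes deep learning fun!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```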
By the time you finish this book, you'll have a thorough understanding of the concepts and tools necessary to start developing, training, and fine-tuning language models using PyTorch.
This volume is more demanding than the other two, and you're going to enjoy it more if you already have a solid understanding of deep learning models.
What's Inside
- Recurrent neural networks (RNN, GRU, and LSTM) and 1D convolutions
- Seq2Seq models, attention, masks, and positional encoding
- Transformers, layer normalization, and the Vision Transformer (ViT)
- BERT, GPT-2, word embeddings, and the Hugging Face library