Generative AI and LLMs: Architecture and Data Preparation
Instructors: Joseph Santarcangelo and one other
There are 2 modules in this course
You will learn about the types of generative AI and their real-world applications, and gain the knowledge to differentiate between generative AI architectures and models such as Recurrent Neural Networks (RNNs), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models, including the differences in how each is trained. You will be able to explain the use of LLMs such as Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT).

You will also learn about the tokenization process, common tokenization methods, and the use of tokenizers for word-based, character-based, and subword-based tokenization, as sketched below. You will be able to explain how data loaders are used to train generative AI models and list the PyTorch libraries for preparing and handling data within data loaders.

This knowledge will prepare you to use the generative AI libraries in Hugging Face, implement tokenization, and create an NLP data loader. For this course, a basic knowledge of Python and PyTorch and an awareness of machine learning and neural networks would be an advantage, though they are not strictly required.
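To give a flavor of the tokenization topic, here is a minimal sketch of the three strategies named above. The sample sentence and the "bert-base-uncased" checkpoint are illustrative assumptions, not choices mandated by the course; AutoTokenizer.from_pretrained and tokenize are standard Hugging Face Transformers APIs.

```python
# Minimal sketch: word-, character-, and subword-based tokenization.
# The text and the model checkpoint below are illustrative assumptions.
from transformers import AutoTokenizer

text = "Tokenization underpins LLMs"

# Word-based tokenization: split on whitespace.
word_tokens = text.split()

# Character-based tokenization: one token per character.
char_tokens = list(text)

# Subword-based tokenization: BERT's WordPiece tokenizer breaks
# rare words into smaller pieces marked with "##".
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
subword_tokens = tokenizer.tokenize(text)

print(word_tokens)      # ['Tokenization', 'underpins', 'LLMs']
print(char_tokens[:5])  # ['T', 'o', 'k', 'e', 'n']
print(subword_tokens)   # e.g. pieces like ['token', '##ization', ...]
```

Subword tokenization is the approach used by GPT and BERT in practice: it keeps the vocabulary small like character-based tokenization while preserving most whole words like word-based tokenization.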
Data Preparation for LLMs
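As a taste of the data-preparation material, the sketch below shows a PyTorch data loader for variable-length token sequences. The ToyTextDataset class, the collate helper, and the toy token ids are hypothetical names introduced for illustration; Dataset, DataLoader, and pad_sequence are real PyTorch APIs.

```python
# Minimal sketch of an NLP data loader, assuming pre-tokenized
# sentences represented as lists of integer token ids (toy data).
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

class ToyTextDataset(Dataset):  # hypothetical name, for illustration
    """Wraps pre-tokenized sentences as tensors of token ids."""
    def __init__(self, token_id_lists):
        self.examples = [torch.tensor(ids) for ids in token_id_lists]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

def collate(batch):
    # Pad variable-length sequences to the longest one in the batch.
    return pad_sequence(batch, batch_first=True, padding_value=0)

dataset = ToyTextDataset([[5, 8, 2], [7, 1], [4, 9, 6, 3]])
loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate)

for batch in loader:
    print(batch.shape)  # e.g. torch.Size([2, 3])
```

The custom collate function is the key design choice here: because sentences differ in length, each batch is padded only to the length of its longest member rather than to a global maximum.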