Generative AI and LLMs: Architecture and Data Preparation

This course is part of multiple programs.

Instructors: Joseph Santarcangelo and one other

What you'll learn

  •   Differentiate between generative AI architectures and models, such as RNNs, Transformers, VAEs, GANs, and Diffusion Models.
  •   Describe how LLMs, such as GPT, BERT, BART, and T5, are used in language processing.
  •   Implement tokenization to preprocess raw textual data using NLP libraries such as NLTK, spaCy, BertTokenizer, and XLNetTokenizer.
  •   Create an NLP data loader using PyTorch to perform tokenization, numericalization, and padding of text data.
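The tokenization objective above covers word-based, character-based, and subword-based granularities. As a minimal sketch of the three ideas in plain Python (the toy functions and toy vocabulary below are illustrative only; production tokenizers such as NLTK, spaCy, or BertTokenizer handle casing, punctuation, and real vocabularies):

```python
def word_tokenize(text):
    # Word-based: split on whitespace (toy version of a word tokenizer).
    return text.split()

def char_tokenize(text):
    # Character-based: every character becomes a token.
    return list(text)

def subword_tokenize(word, vocab):
    # Subword-based: greedy longest-match against a fixed vocabulary,
    # in the spirit of WordPiece (the scheme BertTokenizer uses).
    # `vocab` is a hypothetical toy vocabulary, not BERT's real one;
    # continuation pieces are marked with "##" for display.
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:  # no vocabulary entry matches: emit an unknown marker
            return ["[UNK]"]
        piece = word[start:end]
        tokens.append(piece if start == 0 else "##" + piece)
        start = end
    return tokens

print(word_tokenize("the cat sat"))                       # word tokens
print(char_tokenize("cat"))                               # character tokens
print(subword_tokenize("unhappily", {"un", "happi", "ly"}))  # subword pieces
```

Subword tokenization is the middle ground the course motivates: it keeps frequent words whole while still being able to compose rare words from known pieces.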
Skills you'll gain

  •   Data Processing
  •   Jupyter
  •   PyTorch (Machine Learning Library)
  •   Natural Language Processing
  •   Deep Learning
  •   Large Language Modeling
  •   Artificial Intelligence and Machine Learning (AI/ML)
  •   Generative AI
There are 2 modules in this course

In this course, you will learn about the types of generative AI and their real-world applications. You will gain the knowledge to differentiate between generative AI architectures and models, such as Recurrent Neural Networks (RNNs), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models, and to explain the differences in their training approaches. You will also be able to explain the use of LLMs, such as Generative Pre-trained Transformers (GPT) and Bidirectional Encoder Representations from Transformers (BERT).

You will then study the tokenization process and its methods, including the use of tokenizers for word-based, character-based, and subword-based tokenization. You will be able to explain how data loaders are used for training generative AI models and list the PyTorch libraries for preparing and handling data within data loaders. This knowledge will help you use the generative AI libraries in Hugging Face and prepare you to implement tokenization and create an NLP data loader.

For this course, basic knowledge of Python and PyTorch and an awareness of machine learning and neural networks would be an advantage, though they are not strictly required.
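As a rough sketch of what such an NLP data loader does, assuming PyTorch is installed (the whitespace tokenizer, toy sentences, and toy vocabulary below are illustrative stand-ins, not the course's actual code):

```python
import torch
from torch.utils.data import DataLoader

sentences = ["the cat sat", "the dog barked loudly", "hello"]

# Build a toy vocabulary from the corpus itself; index 0 is reserved for padding.
vocab = {"<pad>": 0}
for s in sentences:
    for tok in s.split():
        vocab.setdefault(tok, len(vocab))

def collate(batch):
    # Tokenize, numericalize, then pad every sequence to the batch maximum,
    # so the batch can be stacked into one rectangular tensor.
    ids = [[vocab[tok] for tok in s.split()] for s in batch]
    max_len = max(len(seq) for seq in ids)
    padded = [seq + [vocab["<pad>"]] * (max_len - len(seq)) for seq in ids]
    return torch.tensor(padded)

loader = DataLoader(sentences, batch_size=3, collate_fn=collate)
batch = next(iter(loader))
print(batch.shape)  # one batch of 3 sentences, padded to equal length
```

Passing a custom `collate_fn` to `DataLoader` is the standard PyTorch mechanism for handling variable-length text: the dataset can yield raw strings, and all tokenization, numericalization, and padding happen at batching time.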

    Data Preparation for LLMs


    ©2025  ementorhub.com. All rights reserved