Generative AI Engineering and Fine-Tuning Transformers

This course is part of multiple programs.

Instructors: Joseph Santarcangelo and 2 others

What you'll learn

  •   Sought-after, job-ready skills businesses need for working with transformer-based LLMs in generative AI engineering, in just 1 week.
  •   How to perform parameter-efficient fine-tuning (PEFT) using LoRA and QLoRA.
  •   How to use pretrained transformers for language tasks and fine-tune them for specific tasks.
  •   How to load models, run inference, and train models with Hugging Face.
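The bullets above describe loading pretrained models and fine-tuning them for specific tasks. A minimal sketch of the usual freeze-and-fine-tune pattern in PyTorch is shown below; a tiny stand-in module replaces a real downloaded checkpoint, so the model, sizes, and data here are illustrative assumptions only:

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" encoder; in practice this would be a transformer
# loaded from a checkpoint (e.g. via the Hugging Face transformers library).
class TinyEncoder(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.layer = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.layer(x))

encoder = TinyEncoder()
head = nn.Linear(16, 2)  # new task-specific classification head

# Freeze the pretrained weights; only the new head is trained.
for p in encoder.parameters():
    p.requires_grad = False

opt = torch.optim.SGD(head.parameters(), lr=0.1)

# Toy batch standing in for real tokenized, embedded inputs.
x = torch.randn(4, 16)
labels = torch.tensor([0, 1, 0, 1])

logits = head(encoder(x))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()  # gradients flow only into the unfrozen head
opt.step()
```

The same pattern applies with a real checkpoint: freeze the pretrained body, attach a task head, and train only the small number of new parameters.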
Skills you'll gain

  •   PyTorch (Machine Learning Library)
  •   Applied Machine Learning
  •   Prompt Engineering
  •   Large Language Modeling
  •   Performance Tuning
  •   Application Frameworks
  •   Natural Language Processing
  •   Generative AI
There are 2 modules in this course

During this course, you’ll explore transformers, model frameworks, and platforms such as Hugging Face and PyTorch. You’ll begin with a general framework for optimizing LLMs and quickly move on to fine-tuning generative AI models. You’ll also learn about parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), and prompting.

Additionally, you’ll get valuable hands-on experience in online labs that you can talk about in interviews, including loading, pretraining, and fine-tuning models with Hugging Face and PyTorch. If you’re keen to take your AI career to the next level and boost your resume with in-demand generative AI competencies, enroll today and gain job-ready skills you can use straight away within a week.
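The paragraph above mentions LoRA, whose core idea can be sketched in plain PyTorch: keep the pretrained weight matrix frozen and learn only a small low-rank update. This is a simplified illustration, not the Hugging Face `peft` library's actual implementation; the class name, rank, and initializations are assumptions for the sketch:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update.

    Computes y = base(x) + (alpha / r) * x A^T B^T, the core LoRA idea:
    the full weight W stays frozen while only the small rank-r matrices
    A and B are trained.
    """
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

base = nn.Linear(32, 32)
lora = LoRALinear(base, r=4)
x = torch.randn(2, 32)

# Because B starts at zero, the adapted layer initially matches the base layer,
# and only 2 * r * 32 = 256 parameters are trainable instead of 32*32 + 32.
print(torch.allclose(lora(x), base(x)))
```

With rank r much smaller than the weight dimensions, the trainable parameter count drops by orders of magnitude, which is what makes LoRA (and its quantized variant, QLoRA) practical for fine-tuning large models on modest hardware.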

Parameter-Efficient Fine-Tuning (PEFT)


©2025 ementorhub.com. All rights reserved.