Generative AI Engineering and Fine-Tuning Transformers
This course is part of multiple programs.
Instructors: Joseph Santarcangelo and 2 others
There are 2 modules in this course
During this course, you’ll explore transformers, model frameworks, and platforms such as Hugging Face and PyTorch. You’ll begin with a general framework for optimizing large language models (LLMs) and quickly move on to fine-tuning generative AI models. You’ll also learn about parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), and prompting.

Additionally, you’ll gain valuable hands-on experience in online labs that you can discuss in interviews, including loading, pretraining, and fine-tuning models with Hugging Face and PyTorch. If you’re keen to take your AI career to the next level and add in-demand generative AI skills to your resume, enroll today and start building job-ready skills you can apply within a week.
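To give a flavor of the kind of hands-on work described above, here is a minimal sketch (not taken from the course labs) of loading and fine-tuning a model with the Hugging Face Transformers and Datasets libraries; the checkpoint, dataset, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: load a pretrained checkpoint and fine-tune it for text
# classification with the Hugging Face Trainer API. Model, dataset, and
# hyperparameters are example choices, not course material.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a small text-classification dataset (IMDB used as an example).
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)
dataset = dataset.map(tokenize, batched=True)

# Standard supervised fine-tuning handled by the Trainer API.
args = TrainingArguments(output_dir="out",
                         num_train_epochs=1,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
```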
Parameter Efficient Fine-Tuning (PEFT)
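For context on what PEFT looks like in practice, the sketch below applies LoRA adapters to a model using Hugging Face's peft library; the base model and LoRA hyperparameters are illustrative assumptions, not settings prescribed by the course.

```python
# Minimal LoRA sketch with the Hugging Face peft library: freeze the base
# weights and train small low-rank update matrices instead.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

# Assumed example base model (any Transformer checkpoint could be used).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor applied to the update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

The wrapped model can then be fine-tuned with an ordinary PyTorch loop or the Trainer API; only the adapter weights are updated, which is what makes the approach parameter-efficient.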