Natural Language Processing with Probabilistic Models

This course is part of the Natural Language Processing Specialization

Instructors: Younes Bensouda Mourri, Łukasz Kaiser

What you'll learn

  •   Use dynamic programming, hidden Markov models, and word embeddings to implement autocorrect and autocomplete, and to identify part-of-speech tags for words.

Skills you'll gain

  •   Data Cleansing
  •   Natural Language Processing
  •   Probability & Statistics
  •   Artificial Intelligence and Machine Learning (AI/ML)
  •   Algorithms
  •   Machine Learning Methods
  •   Markov Model
  •   Data Processing
  •   Text Mining
  •   Artificial Neural Networks

There are 4 modules in this course

    In this course, you will:

  •   Create a simple auto-correct algorithm using minimum edit distance and dynamic programming (a minimal sketch follows below),
  •   Apply the Viterbi algorithm for part-of-speech (POS) tagging, which is vital for computational linguistics,
  •   Write a better auto-complete algorithm using an N-gram language model, and
  •   Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words (CBOW) model.

    By the end of this Specialization, you will have designed NLP applications that perform question answering and sentiment analysis, and created tools that translate languages and summarize text.

    This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.
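    The first of these techniques is concrete enough to sketch here. Below is a minimal Python implementation of minimum edit distance via dynamic programming, in the spirit of the auto-correct module; the cost values (insert = 1, delete = 1, replace = 2) and the function name min_edit_distance are illustrative assumptions, not the course's exact starter code.

    # A minimal sketch of minimum edit distance via dynamic programming.
    # The costs are assumptions (insert = 1, delete = 1, replace = 2),
    # not necessarily the course's exact grading setup.
    def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
        m, n = len(source), len(target)
        # D[i][j] = cheapest cost of turning source[:i] into target[:j]
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):            # delete all of source[:i]
            D[i][0] = i * del_cost
        for j in range(1, n + 1):            # insert all of target[:j]
            D[0][j] = j * ins_cost
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                # keeping a matching character costs nothing
                sub = 0 if source[i - 1] == target[j - 1] else rep_cost
                D[i][j] = min(D[i - 1][j] + del_cost,      # delete
                              D[i][j - 1] + ins_cost,      # insert
                              D[i - 1][j - 1] + sub)       # replace or match
        return D[m][n]

    print(min_edit_distance("play", "stay"))   # 4: two replacements at cost 2

    An auto-correct system built on this idea ranks candidate corrections of a misspelled word by edit distance (typically combined with word probabilities) and suggests the closest candidates first.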

    Autocorrect

    Part of Speech Tagging and Hidden Markov Models

    Autocomplete and Language Models

    Word Embeddings with Neural Networks
