Sequence Models for Time Series and Natural Language Processing

By: Coursera

  • Working with Sequences
    • In this module, you’ll learn what a sequence is, see how to prepare sequence data for modeling, and be introduced to some classical approaches to sequence modeling, with practice applying them.
  • Recurrent Neural Networks
    • In this module, we introduce recurrent neural networks, explain how they address the variable-length sequence problem, show how our usual optimization procedure applies to RNNs, and review what RNNs can and can’t represent.
  • Dealing with Longer Sequences
    • In this module, we dive deeper into RNNs. We’ll cover LSTMs, deep RNNs, working with real-world data, and more.
  • Text Classification
    • In this module, we look at different ways of working with text and how to create your own text classification models.
  • Reusable Embeddings
    • Labeled data for our classification models is expensive and precious. Here we address how to reuse pre-trained embeddings, via TensorFlow Hub, to build our models.
  • Encoder-Decoder Models
    • In this module, we focus on a sequence-to-sequence architecture, the encoder-decoder network, and use it to solve tasks such as machine translation, text summarization, and question answering.
  • Summary
    • In this final module, we review what you have learned so far about sequence modeling for time-series and natural language data.
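To give a flavor of the "classical approaches" the first module refers to, here is a minimal sketch of one such baseline, a moving-average forecast. The function name and the example series are made up for illustration; they are not from the course materials.

```python
def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` observations.

    This is a classical baseline for time-series sequence modeling:
    no learned parameters, just local averaging.
    """
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

# Hypothetical monthly demand series:
demand = [112, 118, 132, 129, 121, 135, 148, 148]
forecast = moving_average_forecast(demand, window=3)
print(forecast)  # mean of the last 3 values: (135 + 148 + 148) / 3
```

Baselines like this are worth fitting first; a neural sequence model only earns its complexity if it beats them.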
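The RNN module above notes that recurrent networks address the variable-length sequence problem. The key idea is that one set of weights is reused at every time step, so sequences of any length collapse to a fixed-size hidden state. A toy NumPy sketch of a single tanh RNN cell (weight shapes and names are illustrative, not from the course):

```python
import numpy as np

def rnn_forward(sequence, W_xh, W_hh, b_h):
    """Run a simple tanh RNN cell over a sequence of input vectors.

    The same weights are applied at every step, which is why the cell
    can consume sequences of any length and still return a state of
    fixed size.
    """
    h = np.zeros(W_hh.shape[0])              # initial hidden state
    for x in sequence:                       # one update per time step
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h                                 # final state summarizes the sequence

rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(4, 3))         # input (dim 3) -> hidden (dim 4)
W_hh = 0.1 * rng.normal(size=(4, 4))         # hidden -> hidden (recurrent)
b_h = np.zeros(4)

short_seq = rng.normal(size=(2, 3))          # 2 time steps
long_seq = rng.normal(size=(7, 3))           # 7 time steps

# Different lengths, same fixed-size output:
print(rnn_forward(short_seq, W_xh, W_hh, b_h).shape)  # (4,)
print(rnn_forward(long_seq, W_xh, W_hh, b_h).shape)   # (4,)
```

In the course itself this mechanism would be wrapped in a TensorFlow layer rather than written by hand; the loop here just exposes the weight sharing that makes variable-length input work.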