Sequence Learning

Elective for CS grad students at the Technical University of Applied Sciences Nuremberg.

Class Schedule and Credits

Time and Location: Mondays at 9.45, HQ.104

Announcements and Discussions: Moodle course 5312.

Format:

Each week, we will discuss algorithms and their theory before implementing them to get a better hands-on understanding. Java is suggested, pair programming is encouraged, and BYOD is strongly recommended!

Credits:

We’ll adopt a common research routine: identify a problem, research prior work, engineer a solution, write it up in a paper, review other papers, present your work. Credits are earned through

  • your 6-page paper, submitted by June 24 (60%)
  • reviewing 3 other papers by July 1 (20%)
  • presenting your work on July 8 (tentative date) (20%)

Note: Materials will be in English; the lectures and tutorials will be taught in German. The class project can be done in the language of your choice.

Recommended Reading:

  • Niemann, H.: Klassifikation von Mustern. 2nd revised edition, 2003. (available online)
  • Huang, X., Acero, A. and Hon, H.: Spoken Language Processing: A Guide to Theory, Algorithm and System Development. (ISBN-13: 978-0130226167)
  • Jurafsky, D. and Martin, J.: Speech and Language Processing. 2017. (available online)
  • Manning, C., Raghavan, P. and Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, 2008. (available online)
  • Goodfellow, I., Bengio, Y. and Courville, A.: Deep Learning. 2016. (available online)

Syllabus

  • March 18: Introduction. (slides, exercise)

    We’ll start with the general concepts of supervised vs. unsupervised learning and classification of independent observations vs. sequences of observations. To get you motivated, we’ll look at a list of recent “AI products” that utilize sequence learning.

  • March 25: Auto-Correct. (slides by Ben Langmead, exercise)

    We’ll start with a classic implementation of auto-correct for misspelled words to refresh your memory of dynamic programming. We’ll also look at scalability in terms of computation and memory requirements. A minimal edit-distance sketch follows below.
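
    To give a feel for the kind of implementation we’ll do in class, here is a minimal sketch of the classic dynamic-programming edit distance (Levenshtein); the class name and the example words are purely illustrative.

    ```java
    // Dynamic-programming (Levenshtein) edit distance as used to rank correction candidates.
    public class EditDistance {

        /** Number of insertions, deletions and substitutions needed to turn a into b. */
        static int levenshtein(String a, String b) {
            int[][] d = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) d[i][0] = i;   // delete all of a
            for (int j = 0; j <= b.length(); j++) d[0][j] = j;   // insert all of b
            for (int i = 1; i <= a.length(); i++) {
                for (int j = 1; j <= b.length(); j++) {
                    int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    d[i][j] = Math.min(d[i - 1][j - 1] + sub,              // match or substitute
                              Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1)); // delete, insert
                }
            }
            return d[a.length()][b.length()];
        }

        public static void main(String[] args) {
            System.out.println(levenshtein("mispelled", "misspelled"));  // 1
        }
    }
    ```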

  • April 1: States and Cost Functions. (slides, exercise)

    We’ll see how dynamic programming generalizes to abstract states and cost functions. We’ll build a smarter, keyboard-layout-aware auto-correct and start looking into applications in signal processing (isolated word and DTMF sequence classification).
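
    A possible sketch of the same recurrence with a pluggable substitution cost; the partial keyboard layout and the cost values below are made-up assumptions, not the ones we’ll use in class.

    ```java
    // Edit distance with a keyboard-aware substitution cost: nearby keys are cheaper to confuse.
    import java.util.Map;

    public class WeightedEditDistance {

        // Rough row/column positions for a handful of keys (illustrative only).
        static final Map<Character, int[]> KEYS = Map.of(
                'q', new int[]{0, 0}, 'w', new int[]{0, 1}, 'e', new int[]{0, 2},
                'a', new int[]{1, 0}, 's', new int[]{1, 1}, 'd', new int[]{1, 2});

        /** Substituting neighbouring keys is cheaper than substituting distant ones. */
        static double subCost(char x, char y) {
            if (x == y) return 0.0;
            int[] p = KEYS.get(x), q = KEYS.get(y);
            if (p == null || q == null) return 1.0;
            return Math.min(1.0, 0.5 * (Math.abs(p[0] - q[0]) + Math.abs(p[1] - q[1])));
        }

        static double distance(String a, String b) {
            double[][] d = new double[a.length() + 1][b.length() + 1];
            for (int i = 1; i <= a.length(); i++) d[i][0] = i;
            for (int j = 1; j <= b.length(); j++) d[0][j] = j;
            for (int i = 1; i <= a.length(); i++)
                for (int j = 1; j <= b.length(); j++)
                    d[i][j] = Math.min(d[i - 1][j - 1] + subCost(a.charAt(i - 1), b.charAt(j - 1)),
                              Math.min(d[i - 1][j] + 1.0, d[i][j - 1] + 1.0));
            return d[a.length()][b.length()];
        }

        public static void main(String[] args) {
            System.out.println(distance("was", "qas"));  // cheap: q is next to w
            System.out.println(distance("was", "das"));  // more expensive: d is farther away
        }
    }
    ```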

  • April 8: Modeling Sequences. (slides, exercise)

    Learn about n-grams, a simple yet effective approach to modeling the context of discrete symbols. We’ll use n-grams to improve our auto-correct by incorporating context and suggesting likely next words.
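
    A minimal, unsmoothed bigram sketch of the idea; the toy training sentence and the suggestion API are illustrative assumptions.

    ```java
    // A tiny bigram model: count word pairs, then suggest the most likely successor.
    import java.util.*;

    public class BigramModel {
        private final Map<String, Map<String, Integer>> counts = new HashMap<>();

        void train(String text) {
            String[] w = text.toLowerCase().split("\\s+");
            for (int i = 0; i + 1 < w.length; i++)
                counts.computeIfAbsent(w[i], k -> new HashMap<>())
                      .merge(w[i + 1], 1, Integer::sum);
        }

        /** P(next | prev) from relative frequencies (no smoothing, for brevity). */
        double probability(String prev, String next) {
            Map<String, Integer> successors = counts.getOrDefault(prev, Map.of());
            int total = successors.values().stream().mapToInt(Integer::intValue).sum();
            return total == 0 ? 0.0 : successors.getOrDefault(next, 0) / (double) total;
        }

        /** Most frequent follower of prev, e.g. for next-word suggestion. */
        Optional<String> suggest(String prev) {
            return counts.getOrDefault(prev, Map.of()).entrySet().stream()
                    .max(Map.Entry.comparingByValue()).map(Map.Entry::getKey);
        }

        public static void main(String[] args) {
            BigramModel m = new BigramModel();
            m.train("the cat sat on the mat the cat ran");
            System.out.println(m.suggest("the"));            // Optional[cat]
            System.out.println(m.probability("the", "cat")); // 2/3
        }
    }
    ```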

  • April 15: Hidden Markov Models. (slides, exercise)

    We’ll take a close look at hidden Markov models and how to (efficiently) evaluate and train them. The Viterbi decoding algorithm recovers the most likely state sequence for a given sequence of observations. We’ll use HMMs to build a proof-of-concept isolated word recognizer.
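
    A compact Viterbi sketch for a discrete HMM; the toy weather/activity model (states, observations, probabilities) is a standard textbook example, not course data.

    ```java
    // Viterbi decoding: most likely state sequence for a sequence of discrete observations.
    public class Viterbi {

        static int[] decode(double[] start, double[][] trans, double[][] emit, int[] obs) {
            int n = start.length, t = obs.length;
            double[][] logp = new double[t][n];
            int[][] back = new int[t][n];
            for (int s = 0; s < n; s++)
                logp[0][s] = Math.log(start[s]) + Math.log(emit[s][obs[0]]);
            for (int i = 1; i < t; i++) {
                for (int s = 0; s < n; s++) {
                    logp[i][s] = Double.NEGATIVE_INFINITY;
                    for (int r = 0; r < n; r++) {
                        double cand = logp[i - 1][r] + Math.log(trans[r][s]) + Math.log(emit[s][obs[i]]);
                        if (cand > logp[i][s]) { logp[i][s] = cand; back[i][s] = r; }
                    }
                }
            }
            int best = 0;
            for (int s = 1; s < n; s++) if (logp[t - 1][s] > logp[t - 1][best]) best = s;
            int[] path = new int[t];
            path[t - 1] = best;
            for (int i = t - 1; i > 0; i--) path[i - 1] = back[i][path[i]];   // backtrace
            return path;
        }

        public static void main(String[] args) {
            double[] start = {0.6, 0.4};                          // states: 0=rainy, 1=sunny
            double[][] trans = {{0.7, 0.3}, {0.4, 0.6}};
            double[][] emit = {{0.1, 0.4, 0.5}, {0.6, 0.3, 0.1}}; // obs: 0=walk, 1=shop, 2=clean
            System.out.println(java.util.Arrays.toString(
                    decode(start, trans, emit, new int[]{0, 1, 2})));  // [1, 0, 0]
        }
    }
    ```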

  • April 22: no class (Easter)

  • April 29: Higher-Level Sequence Modeling with HMM. (slides, exercise)

    Learn how to model complex sequences of arbitrary length, such as utterances in speech recognition or choreographies in sports, where explicit enumeration is infeasible. Here we will combine what we’ve discussed so far: prefix trees, n-gram models and efficient search.
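
    A minimal prefix tree (trie) sketch, as one might use to organize the vocabulary for efficient search; the API shown here is an illustrative assumption.

    ```java
    // A prefix tree (trie): shared prefixes are stored once, enabling fast prefix lookups.
    import java.util.HashMap;
    import java.util.Map;

    public class Trie {
        private final Map<Character, Trie> children = new HashMap<>();
        private boolean isWord;

        void insert(String word) {
            Trie node = this;
            for (char c : word.toCharArray())
                node = node.children.computeIfAbsent(c, k -> new Trie());
            node.isWord = true;
        }

        /** True if some inserted word starts with the given prefix. */
        boolean hasPrefix(String prefix) {
            Trie node = this;
            for (char c : prefix.toCharArray()) {
                node = node.children.get(c);
                if (node == null) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            Trie t = new Trie();
            t.insert("sequence");
            t.insert("sequential");
            System.out.println(t.hasPrefix("sequ"));   // true
            System.out.println(t.hasPrefix("serial")); // false
        }
    }
    ```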

The remaining syllabus is still subject to change!

  • May 6: Feed-Forward Neural Networks. (slides perceptron and nnets, fizzbuzz.py, exercise).

    A brief introduction to neural networks: fundamentals, topologies and training. We’ll skip implementing the details and use TensorFlow for the examples. Did you know that you can program fizzbuzz as a neural network? Please have Python with NumPy and TensorFlow installed and operational on your machine!
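
    The exercises will use TensorFlow; purely to illustrate what a single feed-forward layer computes, here is a plain-Java sketch with made-up weights and sizes.

    ```java
    // A feed-forward pass: each layer is a weighted sum plus bias, squashed by a sigmoid.
    public class FeedForward {

        static double[] layer(double[] x, double[][] w, double[] b) {
            double[] y = new double[b.length];
            for (int j = 0; j < b.length; j++) {
                double sum = b[j];
                for (int i = 0; i < x.length; i++) sum += w[j][i] * x[i];
                y[j] = 1.0 / (1.0 + Math.exp(-sum));   // sigmoid activation
            }
            return y;
        }

        public static void main(String[] args) {
            double[] x = {1.0, 0.0};                      // input
            double[][] w1 = {{2.0, -1.0}, {-3.0, 4.0}};   // hidden layer weights (made up)
            double[] b1 = {0.0, 1.0};
            double[][] w2 = {{1.5, -2.0}};                // output layer weights (made up)
            double[] b2 = {-0.5};
            double[] hidden = layer(x, w1, b1);
            double[] out = layer(hidden, w2, b2);
            System.out.println(out[0]);                   // network output in (0, 1)
        }
    }
    ```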

  • May 13: Recurrent Neural Networks. (slides cs231n: RNNs, exercise)

    Recurrent neural networks use feedback loops to introduce temporal context or “memory” into the network. We’ll study them using two examples: language modeling and drawing classification.
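
    A sketch of the Elman-style recurrence h_t = tanh(W x_t + U h_(t-1) + b) that gives the network its “memory”; the sizes and the random, untrained weights are illustrative assumptions.

    ```java
    // One recurrent step per input frame: the new hidden state mixes input and previous state.
    import java.util.Random;

    public class SimpleRnn {
        final int inSize, hidSize;
        final double[][] w, u;   // input-to-hidden and hidden-to-hidden weights
        final double[] b;

        SimpleRnn(int inSize, int hidSize, long seed) {
            this.inSize = inSize;
            this.hidSize = hidSize;
            Random rng = new Random(seed);
            w = new double[hidSize][inSize];
            u = new double[hidSize][hidSize];
            b = new double[hidSize];
            for (int i = 0; i < hidSize; i++) {
                for (int j = 0; j < inSize; j++) w[i][j] = rng.nextGaussian() * 0.1;
                for (int j = 0; j < hidSize; j++) u[i][j] = rng.nextGaussian() * 0.1;
            }
        }

        /** One time step: the new hidden state depends on the input and the old state. */
        double[] step(double[] x, double[] hPrev) {
            double[] h = new double[hidSize];
            for (int i = 0; i < hidSize; i++) {
                double sum = b[i];
                for (int j = 0; j < inSize; j++) sum += w[i][j] * x[j];
                for (int j = 0; j < hidSize; j++) sum += u[i][j] * hPrev[j];
                h[i] = Math.tanh(sum);
            }
            return h;
        }

        public static void main(String[] args) {
            SimpleRnn rnn = new SimpleRnn(3, 4, 42);
            double[] h = new double[4];                       // initial state: zeros
            double[][] sequence = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
            for (double[] x : sequence) h = rnn.step(x, h);   // unroll over the sequence
            System.out.println(java.util.Arrays.toString(h)); // final hidden state
        }
    }
    ```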

  • May 20: Sequence Kernels and Embeddings for Instance Classification

    In many cases, assigning a single discrete class to an entire sequence does not work well with plain recurrent networks. We’ll learn about sequence kernels and embeddings that map a sequence to a single observation in a continuous space, which can then be used by conventional classifiers.
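
    One very simple way to map a variable-length sequence to a single fixed-size observation is mean pooling over per-frame features, sketched below; learned embeddings and sequence kernels are more powerful, but the interface is the same. The feature values are illustrative.

    ```java
    // Mean pooling: sequences of any length become vectors of a fixed dimension.
    public class MeanPooling {

        /** Averages the frames of a (length x dim) sequence into one dim-vector. */
        static double[] embed(double[][] frames) {
            double[] e = new double[frames[0].length];
            for (double[] frame : frames)
                for (int d = 0; d < e.length; d++) e[d] += frame[d] / frames.length;
            return e;
        }

        public static void main(String[] args) {
            double[][] shortSeq = {{1, 2}, {3, 4}};
            double[][] longSeq = {{1, 2}, {3, 4}, {5, 6}, {7, 8}};
            // Both map to vectors of the same dimension and can feed any conventional classifier.
            System.out.println(java.util.Arrays.toString(embed(shortSeq))); // [2.0, 3.0]
            System.out.println(java.util.Arrays.toString(embed(longSeq)));  // [4.0, 5.0]
        }
    }
    ```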

  • May 27: Sequence to Sequence Learning. (literature and exercise, attention slides, CCL’17 tutorial slides)

    Previous algorithms explicitly modeled the sequence, either as a graph-like structure such as an HMM or by concatenating observations into a single data point. Encoder-decoder networks are a special topology of recurrent neural networks that can be used to model sequence to sequence mappings, as found in end-to-end speech recognition, machine translation or automatic summarization – without explicitly modeling states!
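
    A structural sketch of the encoder-decoder idea in plain Java: the encoder compresses the input sequence into its final hidden state, and the decoder unrolls from that state. The weights are random and untrained, so the emitted symbols are arbitrary; all sizes and the greedy decoding loop are illustrative assumptions.

    ```java
    // Encoder-decoder skeleton: encode input into one state vector, then decode greedily from it.
    import java.util.Random;

    public class EncoderDecoder {
        static final Random RNG = new Random(1);

        static double[][] randomMatrix(int rows, int cols) {
            double[][] m = new double[rows][cols];
            for (double[] row : m)
                for (int j = 0; j < cols; j++) row[j] = RNG.nextGaussian() * 0.1;
            return m;
        }

        /** One Elman-style RNN step: h' = tanh(W x + U h). */
        static double[] step(double[][] w, double[][] u, double[] x, double[] h) {
            double[] out = new double[h.length];
            for (int i = 0; i < h.length; i++) {
                double sum = 0;
                for (int j = 0; j < x.length; j++) sum += w[i][j] * x[j];
                for (int j = 0; j < h.length; j++) sum += u[i][j] * h[j];
                out[i] = Math.tanh(sum);
            }
            return out;
        }

        static double[] oneHot(int idx, int size) {
            double[] v = new double[size];
            v[idx] = 1.0;
            return v;
        }

        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }

        public static void main(String[] args) {
            int vocab = 5, hidden = 8, maxOut = 4;
            double[][] encW = randomMatrix(hidden, vocab), encU = randomMatrix(hidden, hidden);
            double[][] decW = randomMatrix(hidden, vocab), decU = randomMatrix(hidden, hidden);
            double[][] proj = randomMatrix(vocab, hidden);   // hidden state -> output scores

            // Encoder: read the whole input sequence into a single state vector.
            int[] input = {2, 4, 1};
            double[] state = new double[hidden];
            for (int symbol : input) state = step(encW, encU, oneHot(symbol, vocab), state);

            // Decoder: start from the encoder state and greedily emit output symbols.
            int previous = 0;                                // 0 = start-of-sequence symbol
            for (int t = 0; t < maxOut; t++) {
                state = step(decW, decU, oneHot(previous, vocab), state);
                int best = 0;
                for (int k = 1; k < vocab; k++)
                    if (dot(proj[k], state) > dot(proj[best], state)) best = k;
                System.out.println("t=" + t + " emits symbol " + best);
                previous = best;                             // feed the output back in
            }
        }
    }
    ```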

  • June 3: Project Kickoff

  • June 10: no class (Whit Monday)

  • June 17: Project Check-In

  • June 24: Deep Learning: Practical Considerations. (slides: toolkits, practical considerations, deployment, exercise)

    Papers due!

    We’ll compare different deep learning toolkits, their requirements and their strengths, to get a feel for what it takes to apply them to a new problem.

  • July 1: tbd. Reviews due.

  • July 8 (tentative date): Present your work.

Subscribe to the https://github.com/sikoried/sequence-learning/ repository to follow updates.