CS 6890: Deep Learning
Spring 2018


Time and Location: Tue, Thu, Fri 9:00 – 10:20am, ARC 121
Instructor: Razvan Bunescu
Office: Stocker 341
Office Hours: Tue, Thu 10:30 – 11:00am, or by email appointment
Email: bunescu @ ohio edu

Textbook:
  • Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville. MIT Press, 2016.

  • Recommended introductory material:
      • Machine Learning @ Coursera video lectures and exercises.
      • Machine Learning @ Stanford video lectures and exercises.
      • Machine Learning @ Ohio course website.

  • Supplemental deep learning lectures:
      • Deep Learning course sequence @ Coursera.
      • Deep Learning and Reinforcement Learning summer school, 2017 @ Montreal Institute for Learning Algorithms.
      • CS 231n: Convolutional Neural Networks for Visual Recognition, 2017 @ Stanford.
      • CS 598: Cutting-Edge Trends in Deep Learning and Recognition, 2017 @ Illinois.

Course description:
    This course will introduce the multi-layer perceptron, a common deep learning architecture, and its gradient-based training through the backpropagation algorithm. Fully connected neural networks will be followed by more specialized architectures such as convolutional neural networks (for images), recurrent neural networks (for sequences), and memory-augmented neural networks. The latter part of the course will explore more advanced topics, such as generative adversarial networks and deep reinforcement learning. The lectures will cover the theoretical aspects of deep learning models, whereas the homework assignments will give students the opportunity to build and experiment with shallow and deep models, for which skeleton code will be provided.
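
    For a concrete picture of what the first half of the course builds toward, the sketch below defines a small fully connected network in PyTorch and trains it by gradient descent, with the gradients computed by backpropagation. It is only an illustrative example with made-up layer sizes and random data, not the skeleton code distributed with the homework assignments.

        import torch
        import torch.nn as nn

        # A small multi-layer perceptron: one hidden layer with a ReLU
        # nonlinearity, followed by a linear layer producing scores for 3 classes.
        model = nn.Sequential(
            nn.Linear(4, 16),   # 4 input features -> 16 hidden units (sizes are arbitrary)
            nn.ReLU(),
            nn.Linear(16, 3),   # 16 hidden units -> 3 class scores (logits)
        )

        loss_fn = nn.CrossEntropyLoss()                          # softmax + negative log-likelihood
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent

        # Random stand-in data: 32 examples with 4 features each, labels in {0, 1, 2}.
        X = torch.randn(32, 4)
        y = torch.randint(0, 3, (32,))

        for epoch in range(100):
            optimizer.zero_grad()        # clear gradients from the previous step
            loss = loss_fn(model(X), y)  # forward pass + cross-entropy loss
            loss.backward()              # backpropagation: compute gradients
            optimizer.step()             # gradient descent parameter update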

Proposed topics:
    Logistic and Softmax Regression, Feed-Forward Neural Networks, Backpropagation, Vectorization, PCA and Whitening, Deep Networks, Convolution and Pooling, Recurrent Neural Networks, Long Short-Term Memory, Gated Recurrent Units, Neural Attention Models, Sequence-to-Sequence Models, Memory Networks, Distributional Representations, Generative Adversarial Networks, Deep Reinforcement Learning.
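
    As a refresher on the first few topics (softmax regression trained with gradient descent), a minimal NumPy sketch is given below. The data, learning rate, and number of steps are invented purely for illustration.

        import numpy as np

        def softmax(z):
            # Subtract the row-wise max before exponentiating, for numerical stability.
            z = z - z.max(axis=1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=1, keepdims=True)

        # Toy data: n examples, d features, K classes (all invented for illustration).
        np.random.seed(0)
        n, d, K = 200, 5, 3
        X = np.random.randn(n, d)
        y = np.random.randint(0, K, size=n)
        Y = np.eye(K)[y]                   # one-hot targets, shape (n, K)

        W = np.zeros((d, K))               # weight matrix
        b = np.zeros(K)                    # bias vector
        lr = 0.5                           # learning rate

        for step in range(500):
            P = softmax(X @ W + b)         # predicted class probabilities, shape (n, K)
            loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))  # average cross-entropy
            W -= lr * (X.T @ (P - Y) / n)  # gradient descent update for W
            b -= lr * (P - Y).mean(axis=0) # gradient descent update for b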

Prerequisites:
    Previous exposure to basic concepts in machine learning, such as supervised vs. unsupervised learning, classification vs. regression, linear regression, logistic and softmax regression, cost functions, overfitting and regularization, and gradient-based optimization. Substantial experience with programming and familiarity with basic concepts in linear algebra and statistics.

Lecture notes:
    1. Syllabus & Introduction
    2. Logistic Regression, Softmax Regression, and Gradient Descent
    3. Linear algebra and optimization in Python
    4. Machine Learning with PyTorch
    5. Feed-Forward Neural Networks and Backpropagation
    6. Unsupervised Feature Learning with Autoencoders
    7. PCA, PCA whitening, and ZCA whitening (see the whitening sketch after this list)
    8. Variational Autoencoders
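
    To make the whitening step in lecture 7 concrete, here is a minimal NumPy sketch of PCA whitening; the helper name pca_whiten and the eps value are illustrative choices, not taken from the lecture notes.

        import numpy as np

        def pca_whiten(X, eps=1e-5):
            # Center the data, rotate it onto the principal components,
            # and rescale each component to (approximately) unit variance.
            Xc = X - X.mean(axis=0)               # zero-mean the features
            cov = Xc.T @ Xc / Xc.shape[0]         # empirical covariance matrix
            eigvals, U = np.linalg.eigh(cov)      # eigendecomposition (symmetric matrix)
            Xrot = Xc @ U                         # project onto the eigenvector basis
            return Xrot / np.sqrt(eigvals + eps)  # eps guards against division by ~0

        # ZCA whitening multiplies the PCA-whitened data by U.T (the eigenvectors
        # above), rotating the whitened data back to the original coordinate axes.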

Homework assignments:
    1. Assignment and code.
    2. Assignment and code.
    3. Assignment and code and data.


Final Project:

Paper Presentations:

Online reading materials: