Introduction
Machine learning is a broad field that spans reinforcement learning, deep learning, Bayesian nonparametrics, graphical models, probabilistic programming, and much more. In this course we will focus on a central theme: probabilistic inference and, to a lesser extent, modelling. Participants will leave well versed in the fundamentals of computational approaches to approximate inference in probability models.
Objectives
Understand several flavors of approximate inference and the trade-offs between them, as well as a variety of canonical machine learning models.
Lecture Schedule
10:30am-12:30pm, LR7
- M: Bayes nets, factor graphs, sum-product (a minimal sum-product sketch follows this schedule)
  - After-lecture reading: [Bishop 8]
- Tu: Gaussian mixture models (GMM) and expectation-maximization (EM)
  - Before-lecture reading: [Bishop 9]
- W: Hidden Markov models (HMM) and learning
  - Before-lecture reading: [Bishop 13; Murphy 17]
- Th: Sampling and Markov chain Monte Carlo (MCMC)
  - Before-lecture reading: [Bishop 11; Neal '93]
- F: Sequential Monte Carlo (SMC), linear dynamical systems (LDS) and switching LDS
  - Before-lecture reading: [Murphy 18, 23.4-23.6]
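To make Monday's topic (and HW 1) concrete, here is a minimal sketch of sum-product message passing on a three-node chain Bayes net. The chain structure and the probability tables are invented for illustration; they are not taken from the lectures or readings.

```python
# Sum-product (belief propagation) on a tiny chain Bayes net A -> B -> C,
# all variables binary. Illustrative sketch only: the probability tables
# below are made up for the demo.
import numpy as np

p_a = np.array([0.7, 0.3])               # p(A)
p_b_given_a = np.array([[0.7, 0.3],      # p(B | A=0)
                        [0.2, 0.8]])     # p(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],      # p(C | B=0)
                        [0.5, 0.5]])     # p(C | B=1)

# Messages into B on the factor graph of the chain:
#   m_{A->B}(b) = sum_a p(a) p(b|a)
#   m_{C->B}(b) = sum_c p(c|b) = 1      (no evidence observed at C)
msg_from_a = p_a @ p_b_given_a
msg_from_c = p_c_given_b.sum(axis=1)

# The marginal p(B) is the normalized product of incoming messages.
p_b = msg_from_a * msg_from_c
p_b /= p_b.sum()

# Sanity check against brute-force enumeration of the joint p(a, b, c).
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
assert np.allclose(p_b, joint.sum(axis=(0, 2)))
print("p(B) =", p_b)
```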
Lab Schedule
M-F, 2pm-5pm, CDT Office, 8th Flr., Thom
Homework Schedule
- HW 1: Due Wed. 2pm. Implement belief propagation and run it on small and large Bayes nets.
- HW 2: Due Fri. 2pm. Implement expectation-maximization for a Gaussian mixture model, and Bayesian linear regression (a minimal EM sketch follows this list).
- HW 3: Due the following Mon. 2pm. Implement MCMC and SMC inference algorithms for a switching linear dynamical system and a collapsed Bayesian GMM.
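As a starting point for HW 2, here is a minimal EM sketch for a one-dimensional Gaussian mixture. The synthetic data, the choice of K = 2, and the fixed iteration count are assumptions made for illustration; the assignment may require a different model, data set, or interface.

```python
# EM for a K-component 1-D Gaussian mixture (a minimal sketch for HW 2).
# Illustrative assumptions: synthetic data, K = 2, a fixed number of EM
# iterations, and no log-likelihood convergence check.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
K, N = 2, len(x)

# Initialize mixing weights, means, and standard deviations.
pi = np.full(K, 1.0 / K)
mu = rng.choice(x, size=K, replace=False)
sigma = np.full(K, x.std())

for _ in range(50):
    # E-step: responsibilities r[n, k] = p(z_n = k | x_n, current parameters).
    gauss = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = pi * gauss
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the responsibility-weighted data.
    Nk = r.sum(axis=0)
    pi = Nk / N
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

print("weights:", pi, "means:", mu, "stds:", sigma)
```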
Prerequisites
Calculus, linear algebra, probability and statistics, and familiarity with programming.
Required Books
- C. M. Bishop, ‘Pattern Recognition and Machine Learning’, 2006, Springer
- K. P. Murphy, ‘Machine Learning: A Probabilistic Perspective’, 2012, MIT Press