CPSC 532D: Modern Statistical Learning Theory – Fall 2024 (24W1)
Instructor: Danica Sutherland (she): dsuth@cs.ubc.ca, ICICS X563.
TA: TBD.
Lecture info: Mondays/Wednesdays, 13:00 - 14:30, Dempster 201.
Office hours: TBD, hybrid in ICICS X563 + Zoom unless announced otherwise.
We'll use Piazza and Gradescope; links coming closer to the start of term.
Previously offered in 23W1, 22W1, and (with the name 532S) 21W2; this instance will be broadly similar.
This is a course on the mathematical foundations of machine learning. When should we expect ML algorithms to work (or not work), and what kinds of assumptions do we need to make to rigorously prove this?
Schedule
Italicized entries are tentative.
The lecture notes (to be linked) are self-contained,
but the supplements column also refers to the following books (all available as free pdfs) for more details / other perspectives:
- SSBD (2014) is the easiest to understand, but is sometimes over-simplified (occasionally in ways that make it incorrect);
- MRT (second edition, 2018) is sometimes written in a more opaque way, but is at about the level of detail we mostly use;
- Bach (draft as of 2024) is at roughly the level of MRT; in my opinion it's easier to follow, but has a somewhat different focus (more stats-y than CS-y);
- Zhang (2023) is usually at a more advanced, and often more abstract, level than we mostly cover.
Day | Date | Topic | Supplements |
M | Sep 2 | No class: Labour Day |
W | Sep 4 | Course intro, ERM | SSBD 1-2; MRT 2; Bach 2 |
W | Sep 4 | Assignment 1 posted |
M | Sep 9 | Uniform convergence with finite classes | SSBD 2-4; MRT 2 |
W | Sep 11 | Concentration inequalities | SSBD B; MRT D; Bach 1.2; Zhang 2; Wainwright 2 |
M | Sep 16 | Assignment 1 due at noon |
M | Sep 16 | PAC learning; covering numbers | SSBD 3; MRT 2; Bach 4.4.4; Zhang 3.4/4/5 |
M | Sep 16 | Drop deadline |
W | Sep 18 | Rademacher complexity | MRT 3.1; SSBD 26; Bach 4.5; Zhang 6 |
M | Sep 23 |
W | Sep 25 | | |
M | Sep 30 | No class: National Day for Truth and Reconciliation |
W | Oct 2 | | |
M | Oct 7 | | |
W | Oct 9 | |
M | Oct 14 | No class: Thanksgiving Day | |
W | Oct 16 | | |
M | Oct 21 | | |
W | Oct 23 | | |
F | Oct 25 | Withdrawal deadline |
M | Oct 28 | | |
W | Oct 30 | | |
M | Nov 4 | | |
W | Nov 6 | | |
M | Nov 11 | No class: midterm break / Remembrance Day |
W | Nov 13 | No class: midterm break |
M | Nov 18 | | |
W | Nov 20 | | |
M | Nov 25 | | |
W | Nov 27 | | |
M | Dec 2 | | |
W | Dec 4 | | |
? | Dec ?? | Final exam (in person, handwritten) — sometime Dec 10-21, likely Dec 16-21 to avoid NeurIPS |
Logistics
The course meets in person in Dempster 201, with possible rare exceptions (e.g. if I get sick but can still teach, I'll move it online). Note that this room does not have a recording setup; I may do a janky workaround, and will definitely publish thorough lecture notes (see last year's website for examples), but you should plan on usually coming to class.
Grading scheme: 70% assignments, 30% final.
- There will be four or five assignments through the term, involving proving and exploring various things related to the course material. These should be written in LaTeX and handed in on Gradescope; expect each one to take a significant amount of time. They will mostly consist of proofs and similar mathematical or conceptual questions; there may occasionally be some light programming, but nothing major, just exploring some concepts / running some experiments to see how they interact with the math.
- There will be one or two assignments that involve reading a paper, reacting to it, poking at its assumptions slightly further, and possibly answering questions about it orally; details to come.
- The final will be a traditional handwritten, in-person exam.
Overview
Definitely covered: PAC learning, VC dimension, Rademacher complexity, concentration inequalities, margin bounds, stability. We'll also cover most of: PAC-Bayes, analysis of kernel methods, limitations of uniform convergence, analyzing deep nets via neural tangent kernels, provable gaps between kernel methods and deep learning, online learning, feasibility of private learning, and compression-based bounds.
Prerequisites
There are no formal prerequisites.
TL;DR: if you've done well in CS 340/540 or 440/550, didn't struggle with the probability stuff there, and are also pretty comfortable with proofs, you'll be fine. If not, keep reading.
I will roughly assume the following; if you're missing one of them, you can probably get by, but if you're missing multiple, talk to me about it.
- Basic "mathematical maturity": familiarity with reading and writing proofs, recognizing a valid proof, etc. Math 220 or having succeeded at most third-year or up courses in math or theoretical CS/stat should be indicative of this being fine.
- Ideally, a basic understanding of machine learning, as in CS 340 or probably 330/related courses. If you don't have this, you should still be able to get by, but might have to do a little more reading on your own.
- Probability: basic comfort with manipulating probability statements, etc. Anyone who's done well in CS 420, 436R/536N, 440/550, 532W, most graduate Stat classes, many graduate CS theory classes, etc. should be fine. Ideally, Math 302/318/418 or Stat 302. Also probably fine: Stat 241/251, Econ 325/327.
- Linear algebra: ideally Math 310, but Math 152/221/223 are fine; anyone who's done well in CS 340/540 or 440/550 should probably be fine. We don't need anything special here; I'm just going to write matrix expressions and assume you know what's going on. Later in the course we'll use a little bit of abstract vector spaces / Hilbert spaces, but we'll cover that if you haven't seen it before.
- Multivariate calculus: maybe Math 200/217/226/253/263; anyone who's done well in CS 340/540 or 440/550 should be fine. Nothing special here either; I'm just not going to explain what a gradient is.
- Analysis of algorithms: CS 221 (basic algorithms + data structures) should be enough to get by, though CS 421 (NP-completeness, etc.) would be the best case. If you're a statistician or mathematician who hasn't taken anything like this, you should be fine.
- Basic mathematical programming ability in any language: being able to plot functions, etc.; for a rough sense of the level assumed, see the short sketch after this list.
- Ideally, familiarity with programming in a machine learning / statistical context, e.g. being comfortable with numpy and PyTorch/TensorFlow/etc. This is not required, but some options may be easier / more fruitful / more fun if you're comfortable with it.
- In the absolute best case, real analysis and functional analysis (eg Math 320, 321, 420, 421). This is very much not required, and most people who've succeeded in the course haven't taken this, but we only use a few tools from these fields and I'll explain what we need from them as we see them.
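For calibration only, here is a minimal sketch of the "plot a function, run a small experiment" level of programming the course assumes. It happens to use Python with numpy/matplotlib, but any language is fine and these particular libraries are not a requirement:

```python
# A minimal sketch of the programming level assumed; the specific libraries
# (numpy, matplotlib) are illustrative choices, not course requirements.
import numpy as np
import matplotlib.pyplot as plt

# Plot the logistic loss as a function of the margin y f(x).
margins = np.linspace(-4, 4, 200)
plt.plot(margins, np.log1p(np.exp(-margins)))
plt.xlabel("margin y f(x)")
plt.ylabel("logistic loss")
plt.savefig("logistic_loss.png")

# A tiny Monte Carlo experiment: estimate E[X^2] for X ~ N(0, 1);
# the estimate should be close to 1 for large sample sizes.
rng = np.random.default_rng(0)
samples = rng.standard_normal(10_000)
print("estimate of E[X^2]:", (samples ** 2).mean())
```

If writing something like this (in whatever language you prefer) sounds routine, you're at the level the assignments expect on the programming side.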
If you have any specific questions about your background, please ask.
Resources
If you need to refresh your linear algebra or other areas of math:
- Mathematics for Machine Learning (Marc Deisenroth, Aldo Faisal, Cheng Soon Ong; 2020) – a nice overview of basics.
- Linear Algebra Done Right (Sheldon Axler, fourth edition 2024) – a great introduction towards the abstract sides of linear algebra; the book I used for self-study as a teenager.
In addition to the books above, some other points of view you might like:
Measure-theoretic probability is not required for this course, but there are some topics in the course, and related areas, where it could be helpful:
- A Measure Theory Tutorial (Measure Theory for Dummies) (Maya Gupta) – 5 pages, just the basics
- Measure Theory, 2010 (Greg Hjorth) – 110 pages but comes recommended as both thorough and readable
- A Probability Path (Sidney Resnick) – a frequently recommended textbook for non-mathematicians who want to learn measure-theoretic probability in detail, but it's at the scale of a full-semester course; available if you log in via UBC
- There are also lots of other places, of course; e.g. the probability textbooks by Billingsley, Klenke, and Williams are (I think) classics.
Similar courses: