This is a graduate-level introduction to machine learning and probabilistic inference. Instead of teaching classical techniques such as SVMs and HMMs, this course aims to give students principled perspectives on machine learning methodology and contemporary approaches to building statistical models and algorithms. Our ultimate goal is that students, after taking this course, will be able to formulate a new model for a practical problem and choose appropriate algorithms to solve it.
Prerequisites
Machine learning is a subject that relies heavily on mathematical and statistical analysis. Students taking this course should have a good understanding of linear algebra and elementary probability theory.
Course Format
We have no exams!
This course covers several topics. For each topic, we will have lectures, paper reading, homework, and discussions. At the end of the course, each student is required to give a final presentation.
Syllabus
- Monte Carlo Methods, Markov Chain Monte Carlo
- Graphical models: Bayesian Networks and Markov random fields
- Exponential family distributions and conjugate priors
- Variational inference methods
- Sum-product and max-product algorithms, Belief propagation
- Generalized linear model
- Empirical risk minimization and Stochastic gradient descent
- Proximal methods for optimization
- Gaussian Processes and Copula Processes
- Handling big data: streaming processing and coresets
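As a small taste of the first topic on the list, here is a minimal sketch of plain Monte Carlo estimation: approximating π by sampling points uniformly in the unit square and counting how many fall inside the quarter circle. The function name and parameters are illustrative, not part of the course materials.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi (illustrative sketch, not course code).

    Samples points uniformly in the unit square and counts the
    fraction landing inside the quarter circle of radius 1; that
    fraction approximates pi/4.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

The estimate's error shrinks like 1/sqrt(n), independent of dimension, which is one reason Monte Carlo and MCMC methods are the workhorses of probabilistic inference covered later in the course.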
This sounds like a lot of material. Fortunately, you don’t have to memorize everything. Our goal is to provide you with an arsenal, from which you may find a weapon or two that prove useful for tackling your own problems.