Structured Data: Learning, Prediction, Dependency, Testing:
- Goal:
- Many real-world applications involve objects with an explicit or implicit structure. Social networks, protein-protein interaction networks, molecules, DNA sequences, and syntactic tags are instances of explicitly structured data, while texts, images, videos, and biomedical signals are examples with implicit structure. The focus of the course is solving learning and prediction tasks, estimating dependency measures, and performing hypothesis testing under such structural assumptions.
- While structured inputs in learning and prediction problems have been investigated for about three decades, structural assumptions on the output side remain a significantly more challenging and less understood area of statistical learning. The first part of the course provides a transversal and comprehensive overview of recent advances and tools in the rapidly growing field of structured output learning, including graphical models, max-margin approaches, and deep learning. The covered methods fall into two sub-classes: scoring and energy-based techniques, and structured output regression algorithms.
- The second part of the course gives an alternative view on the structured problem family, covering dependency estimation and hypothesis testing. Emerging methods in these fields not only lead to state-of-the-art algorithms in several application areas (such as blind signal separation, feature selection, outlier-robust image registration, and regression on probability distributions), but they also come with elegant performance guarantees, complementing classical statistical tools restricted to unstructured Euclidean domains. We are going to construct features of probability distributions which will enable us to define easy-to-estimate independence measures and distances between random variables. As a byproduct, we will obtain nonparametric extensions of the classical t-test (two-sample test) and the Pearson correlation test (independence test); a sketch of this construction is given below.
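- To make the mean-embedding idea concrete, here is a minimal sketch (not the course's reference material) of the biased, quadratic-time MMD² two-sample statistic with a Gaussian kernel; the function names, bandwidth choice, and toy data below are illustrative assumptions:

```python
# Minimal sketch: biased quadratic-time estimate of MMD^2(P, Q),
# the squared RKHS distance between the mean embeddings of P and Q.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of ||mu_P - mu_Q||^2 in the RKHS."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

# Toy usage: samples from two shifted Gaussians; a large value hints P != Q.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2_biased(X, Y))
```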
- Lecturers: Florence d'Alché-Buc, Zoltán Szabó, Slim Essid, Arthur Tenenhaus, Alexandre Garcia (Datalab).
- Prerequisites:
- The course requires a basic knowledge of kernel methods, graphical models, deep learning, optimization and functional analysis.
- Exam: Project.
- Session 1 (Jan. 15):
- Lecture by Florence.
- Introduction and overview of structured output prediction; beginning of energy-based and max-margin methods.
- Session 2 (Jan. 22):
- Lecture by Slim.
- Conditional random fields (CRF), sequence labelling.
- Session 3 (Jan. 29):
- Datalab by Slim and Alexandre.
- Hands-on practice with CRFs and max-margin Markov networks (M3N).
- Session 4 (Feb. 5):
- Lecture by Florence.
- Multi-Task Learning, Operator-Valued Kernels for Multi-Task Learning.
- Session 5 (Feb. 12):
- Lecture by Arthur.
- Multiway Data Analysis.
- Session 6 (Feb. 26):
- Lecture by Florence.
- Output Kernel Regression - Deep Structured Output Learning.
- Session 7-10 (Mar. 5, 12, 19, 26):
- Lecture by Zoltán. [slides (March 5, 12, 19, 26)]
- Kernel canonical correlation analysis, mean embedding, maximum mean discrepancy, integral probability metric, characteristic/universal kernel, Hilbert-Schmidt independence criterion, covariance operator, Hilbert-Schmidt norm.
- Kernel-based two-sample and independence tests; quadratic- and linear-time methods.
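- As a hedged illustration of these sessions' themes, a minimal sketch of the biased, quadratic-time HSIC statistic with Gaussian kernels, turned into an independence test via a permutation null; the function names, bandwidths, and permutation scheme are illustrative assumptions, not the lectures' reference implementation:

```python
# Minimal sketch: biased quadratic-time HSIC estimate and a permutation
# independence test built on top of it.
import numpy as np

def gaussian_gram(Z, sigma=1.0):
    """Gram matrix of a Gaussian kernel on the rows of Z."""
    sq = (Z**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * Z @ Z.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased HSIC estimate: (1/n^2) trace(K H L H), H the centering matrix."""
    n = X.shape[0]
    K = gaussian_gram(X, sigma_x)
    L = gaussian_gram(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

def hsic_permutation_test(X, Y, n_perm=200, seed=0):
    """p-value from permuting Y to simulate the independence null."""
    rng = np.random.default_rng(seed)
    stat = hsic_biased(X, Y)
    null = [hsic_biased(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)]
    return stat, np.mean([s >= stat for s in null])

# Toy usage: Y depends on X nonlinearly, so the p-value should be small.
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 1))
Y = X**2 + 0.1 * rng.normal(size=(150, 1))
print(hsic_permutation_test(X, Y))
```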