Tutorials

This page contains a series of tutorials that I hope will be helpful.

Talks

Here is a list of talks that I have given over the years.


Learning from Distributions via Support Measure Machines
Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, Bernhard Schoelkopf

Abstract: This paper presents a kernel-based discriminative learning framework on probability measures. Rather than relying on large collections of vectorial training examples, our framework learns using a collection of probability distributions that have been constructed to meaningfully represent training data. By representing these probability distributions as mean embeddings in the reproducing kernel Hilbert space (RKHS), we are able to apply many standard kernel-based learning techniques in a straightforward fashion. To accomplish this, we construct a generalization of the support vector machine (SVM) called a support measure machine (SMM). Our analysis of SMMs provides several insights into their relationship to traditional SVMs. Based on such insights, we propose a flexible SVM (Flex-SVM) that places different kernel functions on each training example. Experimental results on both synthetic and real-world data demonstrate the effectiveness of our proposed framework.
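
To make the embedding idea concrete, here is a minimal Python sketch (a linear-kernel-on-embeddings special case, not the full SMM machinery from the paper): each training example is a bag of points drawn from one distribution, the kernel between two examples is the inner product of their empirical mean embeddings, and the resulting Gram matrix is passed to a standard SVM. The toy bags, labels, and bandwidth gamma below are hypothetical choices for illustration.

import numpy as np
from sklearn.svm import SVC

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel matrix between two sets of vectors (n x d and m x d).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embedding_kernel(sample_p, sample_q, gamma=1.0):
    # Inner product of the empirical kernel mean embeddings of two samples.
    return rbf(sample_p, sample_q, gamma).mean()

rng = np.random.default_rng(0)
# Hypothetical toy data: each training example is a bag of points from one distribution.
bags = [rng.normal(loc=c, scale=0.5, size=(30, 2)) for c in (-1.0, -0.8, 0.9, 1.1)]
labels = np.array([0, 0, 1, 1])

# Gram matrix between distributions, then a standard SVM on the precomputed kernel.
K = np.array([[embedding_kernel(a, b) for b in bags] for a in bags])
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))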


Hilbert Space Embedding for Dirichlet Process Mixtures
Krikamol Muandet

Abstract: This paper proposes a Hilbert space embedding for Dirichlet process mixture models via the stick-breaking construction of Sethuraman. Although Bayesian nonparametrics offers a powerful approach to constructing a prior that avoids the need to specify the model size/complexity explicitly, exact inference is often intractable. On the other hand, frequentist approaches such as kernel machines, which suffer from model selection/comparison problems, often benefit from efficient learning algorithms. This paper discusses the possibility of combining the best of both worlds, using the Dirichlet process mixture model as a case study.
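
For reference, here is a minimal Python sketch of the stick-breaking construction the embedding builds on, truncated to finitely many atoms. The concentration parameter alpha, the Gaussian base measure, and the truncation level are hypothetical choices for illustration.

import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    # Truncated stick-breaking weights of a Dirichlet process (Sethuraman's construction).
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(alpha=2.0, n_atoms=20, rng=rng)
atoms = rng.normal(size=20)    # atoms drawn from a base measure, here N(0, 1)
print(weights.sum())           # close to one for a long enough truncation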


Domain Generalization via Invariant Feature Representation
Krikamol Muandet, David Balduzzi, and Bernhard Schoelkopf

Abstract: This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
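
The sketch below is not the full DICA algorithm (which also preserves the functional relationship between inputs and outputs), but a small Python illustration of a dissimilarity-across-domains term of the kind it seeks to reduce: the average squared RKHS distance between each domain's empirical mean embedding and that of the pooled data. The toy domains and the bandwidth gamma are hypothetical.

import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel matrix between two sets of vectors.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def distributional_variance(domains, gamma=1.0):
    # Average squared RKHS distance between each domain's empirical mean
    # embedding and the mean embedding of the pooled data.
    pooled = np.vstack(domains)
    total = 0.0
    for d in domains:
        total += (rbf(d, d, gamma).mean()
                  - 2.0 * rbf(d, pooled, gamma).mean()
                  + rbf(pooled, pooled, gamma).mean())
    return total / len(domains)

rng = np.random.default_rng(0)
# Hypothetical domains: same task, slightly shifted input distributions.
domains = [rng.normal(loc=m, size=(50, 3)) for m in (0.0, 0.2, 0.4)]
print(distributional_variance(domains))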


Kernel Mean Estimation and Stein Effect
Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, and Bernhard Schoelkopf

Abstract: A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is an important part of many algorithms ranging from kernel principal component analysis to Hilbert-space embedding of distributions. Given a finite sample, an empirical average is the standard estimate for the true kernel mean. We show that this estimator can be improved thanks to a well-known phenomenon in statistics called Stein’s phenomenon. Our theoretical analysis reveals the existence of a wide class of estimators that are better than the standard one. Focusing on a subset of this class, we propose efficient shrinkage estimators for the kernel mean. Empirical evaluations on several applications clearly demonstrate that the proposed estimators outperform the standard kernel mean estimator.
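
A minimal Python sketch, under my own simplifying assumptions, of the most basic form of shrinkage: scaling the empirical kernel mean toward zero. The shrinkage value below is an arbitrary placeholder rather than the data-driven choice studied in the paper, and the kernel and data are hypothetical.

import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel matrix between two sets of vectors.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def shrunk_kernel_mean_weights(n, shrinkage):
    # Weights defining a shrunk kernel mean: (1 - shrinkage) * (1/n) * sum_i k(x_i, .).
    # Setting shrinkage = 0 recovers the standard empirical estimator.
    return (1.0 - shrinkage) * np.full(n, 1.0 / n)

def evaluate_kernel_mean(weights, X, query, gamma=1.0):
    # Evaluate the (shrunk) kernel mean at the query points.
    return weights @ rbf(X, query, gamma)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                          # hypothetical sample
w = shrunk_kernel_mean_weights(len(X), shrinkage=0.1)  # placeholder shrinkage amount
print(evaluate_kernel_mean(w, X, rng.normal(size=(5, 2))))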


Towards a Learning Theory of Cause-Effect Inference
David Lopez-Paz, Krikamol Muandet, Bernhard Schoelkopf, and Ilya Tolstikhin

Abstract: We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection {(S_i, l_i)}, where each S_i is a sample drawn from the probability distribution of X_i × Y_i, and l_i is a binary label indicating whether "X_i → Y_i" or "X_i ← Y_i". Given these data, we build a causal inference rule in two steps. First, we featurize each S_i using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
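
To make the two-step recipe concrete, here is a toy Python sketch (my own illustration, not the paper's implementation): each sample S_i is featurized by an approximate kernel mean embedding built from random Fourier features, and a logistic-regression classifier is trained to predict the causal direction. The synthetic data generator, frequency scale, number of features, and classifier are all hypothetical choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

def mean_embedding_features(sample, W, b):
    # Approximate kernel mean embedding of a sample (n x 2 array of (x, y) pairs)
    # via random Fourier features, averaged over the sample.
    return np.sqrt(2.0 / W.shape[1]) * np.cos(sample @ W + b).mean(axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 200))                # random frequencies (hypothetical bandwidth)
b = rng.uniform(0.0, 2.0 * np.pi, size=200)

def make_pair(causal, n=200):
    # Hypothetical toy mechanism: label 1 means the first column causes the second.
    x = rng.normal(size=n)
    y = np.tanh(x) + 0.1 * rng.normal(size=n)
    return np.column_stack([x, y]) if causal else np.column_stack([y, x])

samples = [make_pair(causal=(i % 2 == 1)) for i in range(100)]
labels = np.array([i % 2 for i in range(100)])

features = np.vstack([mean_embedding_features(s, W, b) for s in samples])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))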

Slides

Below are slides that I have prepared for various talks over the years and would like to share.