Advances in GANs based on the MMD

Host: Mauricio Alvarez


Date:
Event: Machine Learning Seminar
Location: Ada Lovelace (Regent Court COM-108)

Abstract

Generative adversarial networks (GANs) have led to dramatic improvements in sample quality for image generation, but their success is hindered by both practical and theoretical problems, which has prompted a large number of alternative methods over the last few years. We study one of these alternatives, the MMD GAN, which uses an architecture similar to that of the original GAN but performs part of its optimization in closed form, in a Hilbert space. We deepen the understanding of these models, with a particular focus on the behavior of gradient penalties, inspired by the WGAN-GP and the more recent Sobolev GAN, in this context. Based on this analysis, we propose a method to constrain the critic's gradient analytically, rather than with an additive optimization penalty. We demonstrate that MMD GANs with gradient penalties improve on the existing state of the art, the WGAN-GP; our new method, the Scaled MMD GAN, does even better on unsupervised image generation on CelebA and ImageNet.
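
To make the two ingredients concrete, here is a minimal PyTorch sketch (not the authors' code): the closed-form squared-MMD estimate that an MMD GAN critic computes, and the WGAN-GP-style additive gradient penalty that the talk contrasts with the analytic approach of the Scaled MMD GAN. The function names, the Gaussian kernel choice, and the assumption of flattened vector inputs are illustrative assumptions, not details from the paper.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix: k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased closed-form estimate of the squared MMD between samples x ~ P and y ~ Q:
    # mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

def wgan_gp_penalty(critic, real, fake):
    # WGAN-GP-style additive penalty: push the critic's gradient norm toward 1
    # at random interpolates between real and fake samples (flat vectors assumed).
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

In an MMD GAN the kernel is applied to learned critic features, i.e. k(h(x), h(y)) for a network h, rather than to raw inputs; the Scaled MMD GAN discussed in the talk then replaces the additive penalty above with an analytic constraint on the gradient.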

Based on joint work with Michael Arbel, Mikołaj Bińkowski, and Arthur Gretton.

Biography

Dougal Sutherland is a postdoc at University College London's Gatsby Computational Neuroscience Unit, working with Arthur Gretton; he completed his PhD at Carnegie Mellon University in 2016, advised by Jeff Schneider. His research focuses on problems of learning about distributions from samples, including density estimation, two-sample testing, training implicit generative models, and distribution regression.