From GANs to Variational Divergence Minimization

Description

An important problem in achieving general artificial intelligence is the data-efficient learning of representations suitable for causal reasoning, planning, and decision making. Learning such representations from unlabeled data is challenging and requires flexible models to discover the underlying manifold of high-dimensional data. Generative adversarial networks (GANs) are one such flexible family of distributions and have shown promise in unsupervised learning and supervised regression tasks. We show that the learning objective of GANs is a variational bound on a divergence between two distributions, allowing us to extend the GAN objective to general f-divergences, including the Kullback-Leibler divergence. We call this more general principle variational divergence minimization. The generalization of GANs to f-divergences also allows us to treat GANs as a building block in standard machine learning problems. We demonstrate this by extending the variational Bayes inference procedure to the adversarial case, allowing us to use likelihood-free variational families and provide more accurate posterior inferences. GANs are therefore promising both as a building block in larger systems and as an approach to the unsupervised learning problem.
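As a sketch of the variational bound referred to above (the notation below is not part of the description itself): for an f-divergence defined by a convex function f with Fenchel conjugate f*, and writing T for the variational function played by the discriminator, the bound reads

\[
D_f(P \,\|\, Q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
\;\ge\; \sup_{T}\ \Big( \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\big(T(x)\big)\big] \Big).
\]

Here P is the data distribution and Q the generator distribution; training minimizes the right-hand side over the generator while maximizing over T. Choosing f so that D_f is the Jensen-Shannon divergence recovers (up to constants) the original GAN objective, while other choices of f yield objectives such as the Kullback-Leibler divergence.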

