February Fourier Talks 2018

Lawrence Carin

Duke University


On Adversarial Learning


We consider learning a generative model for general data, with imaging data used for demonstration. We seek to match the model's data distribution to the unknown distribution underlying the observed samples. Learning is based on minimizing the Kullback-Leibler (KL) divergence in the reverse direction relative to the typical setup. We demonstrate that this yields a learning framework similar to, but distinct from, the original generative adversarial network (GAN), in which we estimate an explicit "critic" in the form of a likelihood ratio. This framework addresses previously noted challenges with the original GAN, and it extends adversarial methods so that learning may proceed from either an unnormalized distribution or, as is more widely assumed, from observed samples.
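The reverse-KL setup and the likelihood-ratio critic mentioned in the abstract can be sketched as follows. This is a minimal sketch, not the talk's actual formulation: the symbols p_theta (model distribution) and q (data distribution) are notation introduced here for illustration.

```latex
% Maximum-likelihood learning minimizes the "forward" KL divergence,
%   KL(q || p_theta) = E_{x ~ q}[ log q(x) - log p_theta(x) ].
% The setup described in the abstract instead minimizes the reverse direction:
\min_\theta \; \mathrm{KL}(p_\theta \,\|\, q)
  = \min_\theta \; \mathbb{E}_{x \sim p_\theta}
    \left[ \log \frac{p_\theta(x)}{q(x)} \right].
% The ratio p_theta(x)/q(x) is not available directly, but a binary
% classifier ("critic") D(x) trained to separate data samples (label 1)
% from model samples (label 0) has the well-known optimum
%   D*(x) = q(x) / ( q(x) + p_theta(x) ),
% so the likelihood ratio can be recovered explicitly from the critic:
\frac{p_\theta(x)}{q(x)} = \frac{1 - D^{*}(x)}{D^{*}(x)}.
```

This density-ratio identity is the standard trick underlying GAN-style discriminators; it is consistent with the abstract's statement that the critic is estimated explicitly in terms of a likelihood ratio, though the talk's precise construction may differ.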
