# A Variational Perspective on Accelerated Methods in Optimization

This is a follow-up to my previous post. As I mentioned there, Dar led our discussion of two largely unrelated papers. This post is about the second of the two, “A Variational Perspective on Accelerated Methods in Optimization” by Wibisono et al. [1]. This is a more theoretical paper investigating the nature of accelerated gradient methods and the natural scope of such concepts. Here we’ll introduce and motivate some of the mathematical ideas and physical intuition used in the paper, along with an overview of its main contributions.
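As a concrete reference point for what “acceleration” means here, the following is a minimal sketch (my own toy example, not code from the paper) comparing plain gradient descent with Nesterov’s accelerated gradient on an ill-conditioned quadratic:

```python
import numpy as np

# Toy illustration: minimize f(x) = 0.5 * x^T A x, whose gradient is A @ x.
A = np.diag([1.0, 100.0])      # ill-conditioned quadratic
grad = lambda x: A @ x
step = 1.0 / 100.0             # 1 / L, where L is the largest eigenvalue of A

# Plain gradient descent.
x = np.array([1.0, 1.0])
for _ in range(200):
    x = x - step * grad(x)

# Nesterov acceleration: take the gradient step at an extrapolated point,
# then mix in momentum from the previous iterate.
y = z = np.array([1.0, 1.0])
for k in range(200):
    z_next = y - step * grad(y)
    y = z_next + (k / (k + 3.0)) * (z_next - z)
    z = z_next

# The accelerated iterate ends up much closer to the minimizer at the origin.
print(np.linalg.norm(x), np.linalg.norm(z))
```

The intriguing part, and a theme of the paper, is that the momentum update looks nothing like a discretization of gradient flow, which is what makes a variational account of where it comes from so appealing.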

Continue reading

# Implicit Generative Models — What are you GAN-na do?

A few weeks ago, Dar led our discussion of “Learning in Implicit Generative Models” by Mohamed and Lakshminarayanan [1]. This paper gives a good overview of techniques for learning in implicit generative models, and it has links to several of the areas we’ve discussed this past year, which I’ll reference throughout.

Continue reading

# Reparameterization Gradients through Rejection Sampling Algorithms

This post begins with an apparent contradiction: on the one hand, the reparameterization trick seems limited to a handful of distributions; on the other, every random variable we simulate on our computers is ultimately a reparameterization of a bunch of uniforms. So what gives? Our investigation into this question led to the paper, “Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms,” which we recently presented at AISTATS [1]. In it, we debunk the myth that the gamma distribution and all the distributions that are derived from it (Dirichlet, beta, Student’s t, etc.) are not amenable to reparameterization [2-5]. We’ll show how these distributions can be incorporated into automatic variational inference algorithms with just a few lines of Python.
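To make the “reparameterization of uniforms” point concrete, here is a minimal sketch (my own illustration, not code from the paper) using the inverse-CDF transform: even an exponential draw is just a deterministic, differentiable function of a uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_exponential(rate, u):
    # Inverse-CDF transform: F^{-1}(u) = -log(1 - u) / rate maps
    # Uniform(0, 1) draws to Exponential(rate) draws deterministically.
    return -np.log(1.0 - u) / rate

u = rng.uniform(size=100_000)
samples = sample_exponential(2.0, u)
print(samples.mean())  # should be close to 1 / rate = 0.5
```

Because the transform is differentiable in `rate`, gradients can flow through the sample — the essence of the reparameterization trick. The subtlety the paper addresses is that distributions like the gamma are simulated by rejection sampling rather than a single closed-form transform, which is why they were long thought to fall outside this framework.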

Continue reading