This week we scrutinized, in a discussion led by Shizhe Chen, two recent papers: “The Concrete Distribution: a Continuous Relaxation of Discrete Random Variables” by Chris Maddison and colleagues [1], and “Categorical Reparameterization by Gumbel-Softmax” by Eric Jang and collaborators [2]. Additionally, we considered a third paper: “GANS for Sequences of Discrete Elements with the Gumbel-Softmax Distribution” by Kusner and Hernández-Lobato [3]. These notes refer mainly to [1] and [2], which are currently under review for ICLR 2017. We also briefly address [3] at the end, which was presented in the recent “Adversarial Training” workshop at NIPS 2016.

Introduction

In science, it often occurs that theories, theorems, results, etc. are discovered or established at nearly the same time at different places. I hypothesize that this “ideas are in the air” phenomenon is not magic; instead, I believe those ideas have an increased likelihood of being discovered because they are — somehow — the most natural extensions of the fields in which they are being developed.

The striking similarity between the main ideas of [1] and [2], namely the “Gumbel-Softmax trick” for re-parameterizing categorical distributions, serves as an example of such simultaneous discovery in machine learning. Here, the underlying explanation for the coincidence is rather obvious: one of the most popular new techniques in variational inference and generative modeling, the so-called “re-parameterization trick,” could not be applied to discrete-valued random variables, imposing a significant hurdle for the development of new results. Now, with the Gumbel-Softmax trick as an add-on, we can do re-parameterization for inference involving discrete latent variables. This opens up the promise of new findings in areas where the primary objects are discrete in nature, e.g., text modeling.

Before stating the results we start by reviewing the re-parameterization trick and its uses.

The re-parameterization trick is a hot idea, but it fails on discrete data

Let’s begin by stating the re-parameterization trick (made popular in [4]). Let’s first recall the Law of the Unconscious Statistician (LOTUS), a simple rule of calculus stating that one can compute the expectation of a measurable function $g$ of a random variable $X$ by integrating with respect to the distribution function $F_X$ of $X$, that is:

$$\mathbb{E}[g(X)] = \int g(x)\, dF_X(x).$$

In other words, to compute the expectation of $g(X)$ we only need to know $g$ and the distribution of $X$. We do not need to explicitly know the distribution of $g(X)$. We can also express the above with the convenient alternative notation

$$\mathbb{E}[g(X)] = \mathbb{E}_{x \sim p_X}[g(x)],$$

where $p_X$ denotes the density (or mass function) of $X$.

Now, suppose a certain variable $z$ has a distribution that depends on a parameter $\theta$, i.e. $z \sim p_\theta(z)$. Moreover, assume one can express $z = g(\theta, \epsilon)$ for a certain known function $g$ of the parameters and a certain noise distribution $p(\epsilon)$ (e.g., $\epsilon \sim \mathcal{N}(0, 1)$). From the LOTUS we know that for any measurable function $f$:

$$\mathbb{E}_{z \sim p_\theta}[f(z)] = \mathbb{E}_{\epsilon \sim p(\epsilon)}[f(g(\theta, \epsilon))].$$

However, by itself the above formula is not enough. Indeed, in our ML applications we will be faced with the need to compute gradients of such expectations:

$$\nabla_\theta\, \mathbb{E}_{z \sim p_\theta}[f(z)] = \nabla_\theta\, \mathbb{E}_{\epsilon \sim p(\epsilon)}[f(g(\theta, \epsilon))] = \mathbb{E}_{\epsilon \sim p(\epsilon)}\big[\nabla_\theta f(g(\theta, \epsilon))\big].$$

The second equality of the above expression constitutes another feature of the re-parameterization trick: we have conveniently expressed $z$ so that expectations of functions of $z$ become integrals w.r.t. a density that does not depend on the parameter $\theta$; therefore, we can exchange the expectation and the gradient, i.e., “differentiate under the integral sign.”

The final feature of the re-parameterization trick has to do with how to use the above gradient formula to construct good unbiased estimates of the gradient, a task we will often be faced with (see below). The above formula gives an immediate way to obtain an unbiased estimate of the gradient via Monte Carlo:

$$\nabla_\theta\, \mathbb{E}_{z \sim p_\theta}[f(z)] \approx \frac{1}{S} \sum_{s=1}^{S} \nabla_\theta f(g(\theta, \epsilon^{(s)})), \qquad \epsilon^{(s)} \overset{\text{i.i.d.}}{\sim} p(\epsilon).$$

Although the reasons are not completely understood, in real applications this re-parameterization-based estimate of the gradient is observed to exhibit much lower variance than competing estimators.
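To make this concrete, here is a minimal sketch (our own illustration, not from the papers) of the re-parameterized Monte Carlo gradient for a Gaussian, letting PyTorch’s autodiff push the gradient inside the expectation; the integrand $f(z) = z^2$ is an arbitrary choice:

```python
import torch

# Reparameterized Monte Carlo estimate of grad E_{z ~ N(mu, sigma^2)}[f(z)],
# using z = g(theta, eps) = mu + sigma * eps with eps ~ N(0, 1).
def f(z):
    return z ** 2  # example integrand; E[f(z)] = mu^2 + sigma^2

mu = torch.tensor(1.0, requires_grad=True)
sigma = torch.tensor(2.0, requires_grad=True)

eps = torch.randn(10_000)   # noise whose distribution is parameter-free
z = mu + sigma * eps        # the re-parameterization g(theta, eps)
f(z).mean().backward()      # autodiff differentiates under the integral sign

print(mu.grad, sigma.grad)  # close to (2*mu, 2*sigma) = (2.0, 4.0)
```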

Why do we need this trick: variational inference, and adversarial generative modeling

Let’s now briefly state two scenarios where we will need to apply the re-parameterization trick. They are not mutually exclusive (see [5] for a discussion on their relation), but here we state them separately for pedagogical reasons.

Variational inference

First, consider variational inference in a latent variable model $p_\theta(x, z)$: we want to have access (i.e., to evaluate, to maximize) to the posterior $p_\theta(z \mid x) = p_\theta(x, z) / p_\theta(x)$, which will usually be intractable because of the evidence $p_\theta(x)$ in the denominator. In a standard variational setup we find a variational approximation $q_\phi(z \mid x)$ such that it minimizes its “distance” (KL divergence) to the true posterior; or equivalently, it maximizes a lower bound for the log evidence,

$$\log p_\theta(x) \geq \mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x, z) - \log q_\phi(z \mid x)\big].$$

Learning will then be performed by a double maximization of this surrogate function (the ELBO): with respect to $\phi$ in order to bridge the gap with the true posterior, and with respect to $\theta$, to maximize the evidence.

The most direct approach to learning is gradient ascent. Unfortunately, gradients won’t be available in closed form because they are applied to the often intractable expectation with respect to $q_\phi(z \mid x)$. The standard remedy, then, is gradient-based stochastic optimization, a family of methods that solve the above problem by assuming that noisy but unbiased estimators of the gradient are available: in that case, we can replace the true gradients by the noisy ones as long as we adapt the learning rate of our algorithms accordingly.

Whenever we can re-parameterize $z \sim q_\phi(z \mid x)$ as $z = g(\phi, \epsilon)$ with respect to a noise distribution $p(\epsilon)$, we will be able to construct a good unbiased estimator of the gradient of the ELBO (essential for fast convergence of stochastic optimization methods) based on Monte Carlo samples, as we’ll have:

$$\nabla_\phi\, \mathcal{L}(\theta, \phi) = \mathbb{E}_{\epsilon \sim p(\epsilon)}\Big[\nabla_\phi \big(\log p_\theta(x, g(\phi, \epsilon)) - \log q_\phi(g(\phi, \epsilon) \mid x)\big)\Big].$$
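As an illustration, here is a toy single-sample version of this estimator; the model, variational family, and all names below are our own choices, not taken from [1] or [2]: $p(z) = \mathcal{N}(0, 1)$, $p(x \mid z) = \mathcal{N}(z, 1)$, and $q_\phi(z \mid x) = \mathcal{N}(\mu, \sigma^2)$.

```python
import torch
from torch.distributions import Normal

# One-sample reparameterized estimate of the ELBO gradient for a toy model:
# p(z) = N(0, 1), p(x | z) = N(z, 1), q(z | x) = N(mu, sigma^2).
x = torch.tensor(0.5)                       # a single observation
mu = torch.tensor(0.0, requires_grad=True)  # variational parameters phi
log_sigma = torch.tensor(0.0, requires_grad=True)

eps = torch.randn(())                       # parameter-free noise
z = mu + log_sigma.exp() * eps              # re-parameterized sample from q

elbo = (Normal(0.0, 1.0).log_prob(z)        # log p(z)
        + Normal(z, 1.0).log_prob(x)        # log p(x | z)
        - Normal(mu, log_sigma.exp()).log_prob(z))  # - log q(z | x)

elbo.backward()                             # unbiased gradient w.r.t. (mu, log_sigma)
print(mu.grad, log_sigma.grad)
```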

Adversarial learning of generative models

Another case where we can profit from re-parameterization comes from adversarial learning. Suppose we are concerned with learning a (perhaps very high-dimensional) distribution $p(x)$ based on observed samples $x_1, \ldots, x_N$. Here, we assume the generative parameters $\theta$ allow us to express the complex dependencies between the variables in $x$ (e.g., through a deep generative process) so that $p_\theta$ will be a rich, parsimonious parametric approximation of the empirical distribution $\hat{p}_{\text{data}}$.

As nicely stated in [5], if we assume that for some function $G_\theta$ and noise distribution $p(\epsilon)$ we have $x = G_\theta(\epsilon) \sim p_\theta$, then we can cast the (intractable) problem of learning $p_\theta$ as a minimax game where we simultaneously try to minimize with respect to the generative model parameters $\theta$ and maximize with respect to the parameters $\omega$ of an ad-hoc discriminative network $D_\omega$ (e.g., a multi-layer perceptron). In other words, we do:

$$\min_\theta \max_\omega\; \mathbb{E}_{x \sim \hat{p}_{\text{data}}}\big[\log D_\omega(x)\big] + \mathbb{E}_{\epsilon \sim p(\epsilon)}\big[\log\big(1 - D_\omega(G_\theta(\epsilon))\big)\big].$$

Adversarial learning algorithms iteratively sample batches from the data and noise distributions and use the noisy gradient information to simultaneously ascend in the parameters of $D$ (i.e., $\omega$) while descending in the parameters of $G$ (i.e., $\theta$); a rough sketch of one such update is shown below.
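The following is a hedged sketch of a single adversarial update (the architectures, sizes, and data stand-in are our assumptions, not from [5] or [6]); note how the generator gradient flows through the re-parameterized samples $G_\theta(\epsilon)$:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
G = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
D = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt_g = torch.optim.SGD(G.parameters(), lr=1e-2)
opt_d = torch.optim.SGD(D.parameters(), lr=1e-2)

real = torch.randn(64, 2) + 3.0  # stand-in for a batch of observed data
eps = torch.randn(64, 8)         # noise with a fixed, parameter-free distribution
fake = G(eps)                    # re-parameterized generator samples

# Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(eps)))]
d_loss = -(F.logsigmoid(D(real)).mean() + F.logsigmoid(-D(fake.detach())).mean())
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: descend E[log(1 - D(G(eps)))]; gradients reach G through fake
g_loss = F.logsigmoid(-D(fake)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```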

Notice that if we were not able to re-parameterize, we could only express the minimax objective as

$$\min_\theta \max_\omega\; \mathbb{E}_{x \sim \hat{p}_{\text{data}}}\big[\log D_\omega(x)\big] + \mathbb{E}_{x \sim p_\theta}\big[\log\big(1 - D_\omega(x)\big)\big].$$

Therefore, alternative unbiased estimates of the gradient of the second term (w.r.t. $\theta$) would be required if stochastic optimization were attempted.

One alternative (which also applies to the variational setup) is to use the score-function estimator (also called the REINFORCE estimator for historical reasons), which is based on the log-derivative trick $\nabla_\theta p_\theta(x) = p_\theta(x)\, \nabla_\theta \log p_\theta(x)$:

$$\nabla_\theta\, \mathbb{E}_{x \sim p_\theta}[f(x)] = \mathbb{E}_{x \sim p_\theta}\big[f(x)\, \nabla_\theta \log p_\theta(x)\big].$$

However, the variance of this estimator can be so high that it may not qualify as a realistic alternative to the re-parameterization-based estimator.
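A quick sketch of the variance gap (the toy target $\nabla_\mu\, \mathbb{E}_{z \sim \mathcal{N}(\mu, 1)}[z^2] = 2\mu$ and all numbers are our own illustration):

```python
import torch

torch.manual_seed(0)
mu = torch.tensor(1.0)
z = mu + torch.randn(100_000)  # samples from N(mu, 1)

# Score-function estimator: f(z) * grad_mu log N(z; mu, 1) = z^2 * (z - mu)
score_est = z ** 2 * (z - mu)
# Re-parameterization estimator: grad_mu f(mu + eps) = 2 * (mu + eps) = 2 * z
reparam_est = 2 * z

print(f"true gradient:  {2 * mu.item():.2f}")
print(f"score-function: mean {score_est.mean().item():.2f}, var {score_est.var().item():.2f}")
print(f"reparam:        mean {reparam_est.mean().item():.2f}, var {reparam_est.var().item():.2f}")
# Both are unbiased, but the score-function variance is typically far larger.
```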

Why things can go wrong in discrete cases

The reason why we cannot apply the re-parameterization trick to discrete variables is simple: by elementary facts of real analysis, it is mathematically impossible for a non-degenerate function that maps a continuous set onto a discrete set to be differentiable (it is not even continuous!). That is, for the functional relation $z = g(\theta, \epsilon)$, the gradient $\nabla_\theta g(\theta, \epsilon)$ does not make any sense in the discrete case, regardless of the value of $\epsilon$. Alternatively, in deep learning jargon: “we cannot backpropagate the gradients through discrete nodes in the computational graph.”

As stated above, in both cases (variational inference and adversarial generative modeling) we will still be able to construct alternative estimates of the gradients. However, they may not (and in practice do not!) enjoy the low-variance property of the re-parameterization-based ones.

The Gumbel distribution and softmax function to the rescue

The Gumbel-Softmax trick is an attempt to overcome the inability to apply the re-parameterization trick to discrete data. It is the result of two insights: 1) a nice parameterization for a discrete (or categorical) distribution is given in terms of the Gumbel distribution (the Gumbel trick); and 2) although the corresponding function is non-continuous, it can be made continuous by applying a continuous approximation that depends on a temperature parameter, which in the zero-temperature limit degenerates to the original discontinuous expression. We now describe both components.

The Gumbel distribution trick

Let’s first recall what a Gumbel distribution is. A random variable $G$ is said to have a standard Gumbel distribution if $G = -\log(-\log(U))$ with $U \sim \text{Uniform}(0, 1)$. For us, its importance is a consequence of the fact that we can parameterize any discrete distribution in terms of Gumbel random variables, by using the following fact:

Let $X$ be a discrete random variable with $P(X = k) \propto \alpha_k$ and let $\{G_k\}_{k \leq K}$ be an i.i.d. sequence of standard Gumbel random variables. Then:

$$X = \arg\max_{k} \big(\log \alpha_k + G_k\big).$$

In other words, a recipe for sampling from a categorical distribution is: 1) draw Gumbel noise by just transforming uniform samples; 2) add it to $\log \alpha_k$, which only has to be known up to a normalizing constant; and 3) take the index of the maximum value, as in the sketch below.
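Here is that recipe in code (a small illustration of ours; the probabilities are arbitrary):

```python
import torch

torch.manual_seed(0)
log_alpha = torch.log(torch.tensor([0.1, 0.6, 0.3]))  # log-weights, possibly unnormalized

def gumbel_max_sample(log_alpha, n_samples):
    u = torch.rand(n_samples, log_alpha.shape[0])
    g = -torch.log(-torch.log(u))               # standard Gumbel noise from uniforms
    return torch.argmax(log_alpha + g, dim=-1)  # index of the maximum

samples = gumbel_max_sample(log_alpha, 100_000)
print(torch.bincount(samples) / 100_000)        # approximately (0.1, 0.6, 0.3)
```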

Relaxing the discreteness

Unfortunately, the $\arg\max$ operation that relates the Gumbel samples, the $\alpha_k$’s, and the realizations of the discrete distribution is not continuous. One way to circumvent this, as suggested in [1] and [2], is to relax the discrete set by considering random variables taking values in a larger set. To construct this relaxation we start by recognizing that 1) any discrete random variable can always be expressed as a one-hot vector (i.e., a vector filled with zeros except for one coordinate equal to one), by mapping the realization of the variable to the index of the non-zero entry of the vector; and 2) the convex hull of the set of one-hot vectors is the probability simplex:

$$\Delta^{K-1} = \Big\{ x \in \mathbb{R}^K : x_k \geq 0,\; \textstyle\sum_{k=1}^{K} x_k = 1 \Big\}.$$

Therefore, a natural way to extend (or “relax”) a discrete random variable is by allowing it to take values in the probability simplex. Both [1] and [2] propose to consider the softmax map (indexed by a temperature parameter $\tau > 0$):

$$\text{softmax}_\tau(x)_k = \frac{\exp(x_k / \tau)}{\sum_{i=1}^{K} \exp(x_i / \tau)}, \qquad k = 1, \ldots, K.$$

With this definition we can define (instead of the discrete-valued random variable $X$) the sequence of simplex-valued random variables, indexed by $\tau$:

$$Y_k = \frac{\exp\big((\log \alpha_k + G_k)/\tau\big)}{\sum_{i=1}^{K} \exp\big((\log \alpha_i + G_i)/\tau\big)}, \qquad k = 1, \ldots, K.$$

The random variable $Y = (Y_1, \ldots, Y_K)$ defined as above is said to have the Concrete distribution (concrete is a portmanteau of continuous and discrete, in case you haven’t got the pun already), denoted $Y \sim \text{Concrete}(\alpha, \tau)$. Its density (which follows simply from the change-of-variables theorem and some integration) is given by:

$$p_{\alpha, \tau}(y) = (K-1)!\; \tau^{K-1} \prod_{k=1}^{K} \frac{\alpha_k\, y_k^{-\tau - 1}}{\sum_{i=1}^{K} \alpha_i\, y_i^{-\tau}}.$$

In practice, what really matters is not the specific expression for the density, but the fact that this expression is in closed form and can be evaluated exactly for different values of $y$ and $\alpha$. Indeed, in the standard variational inference methodology, looking at the definition of the ELBO we see that we need to evaluate the entropy term $-\mathbb{E}_{q}\big[\log q_\phi(z \mid x)\big]$, which explicitly depends on the density.
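Sampling from the relaxation amounts to pushing the Gumbel-perturbed logits through the tempered softmax; the short sketch below (our own illustration) shows how samples spread over the simplex for large $\tau$ and concentrate near the one-hot vertices for small $\tau$:

```python
import torch

torch.manual_seed(0)
log_alpha = torch.log(torch.tensor([0.1, 0.6, 0.3]))

def concrete_sample(log_alpha, tau, n_samples):
    u = torch.rand(n_samples, log_alpha.shape[0])
    g = -torch.log(-torch.log(u))                        # standard Gumbel noise
    return torch.softmax((log_alpha + g) / tau, dim=-1)  # a point in the simplex

for tau in (5.0, 1.0, 0.1):
    print(tau, concrete_sample(log_alpha, tau, 2))
# High tau: smoothed, near-uniform samples; low tau: nearly one-hot samples.
```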

Properties

Now we will briefly comment on what the above definitions entail, to better understand the nature of this relaxation. The following four properties (the first three from [1], the fourth from [2]) are especially informative:

  1. Rounding: $P\big(Y_k > Y_i \text{ for all } i \neq k\big) = \alpha_k / \sum_{i=1}^{K} \alpha_i$, i.e., rounding a Concrete sample to its nearest vertex recovers the original discrete distribution.
  2. Zero temperature: $P\big(\lim_{\tau \to 0} Y_k = 1\big) = \alpha_k / \sum_{i=1}^{K} \alpha_i$, i.e., in the zero-temperature limit the Concrete distribution degenerates to the discrete one.
  3. Convex eventually: if $\tau \leq 1/(K-1)$, then $p_{\alpha, \tau}(y)$ is a log-convex function of $y$.
  4. For learning, there is a tradeoff between small and large temperatures.

The rounding property is important for obtaining actual discrete samples in this relaxed framework. It is a simple consequence of the fact that the softmax preserves the ordering of its arguments, and implies that even in this relaxed regime with non-zero temperature, we can still easily sample from our original discrete distribution by mapping points in the simplex to the one-hot vector (an extreme point of the simplex) whose non-zero coordinate corresponds to the entry closest to one. This property is exploited in [2] to construct the ‘Straight-Through’ Gumbel estimator, needed in cases where one does not want to destroy the discrete structure in the hidden model.
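A hedged sketch of the Straight-Through idea (the exact implementation details here are our own, not necessarily those of [2]): use the rounded one-hot sample in the forward pass while letting gradients flow through the relaxed sample in the backward pass.

```python
import torch

def st_gumbel_sample(log_alpha, tau):
    u = torch.rand_like(log_alpha)
    g = -torch.log(-torch.log(u))                               # standard Gumbel noise
    y_soft = torch.softmax((log_alpha + g) / tau, dim=-1)       # relaxed sample
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)  # rounded one-hot
    # Forward value equals y_hard; the backward pass sees only y_soft.
    return y_hard + (y_soft - y_soft.detach())

log_alpha = torch.log(torch.tensor([0.1, 0.6, 0.3])).requires_grad_()
y = st_gumbel_sample(log_alpha, tau=0.5)
(y * torch.tensor([1.0, 2.0, 3.0])).sum().backward()  # toy downstream loss
print(y)               # exactly one-hot in the forward pass
print(log_alpha.grad)  # nonzero: gradients reached the logits
```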

The zero-temperature property says things are well behaved, in that in the zero-temperature limit we recover the original discrete distribution. One way to see this is to think of the logistic function modulated by a slope parameter: for a steep slope (low temperature) the logistic function approaches the Heaviside step function, taking only two values.

The ‘convex eventually’ property gives us a good guarantee for optimization: recall that we will usually be faced with optimization problems that involve the log density. The convexity property tells us we will be better off as long as the temperature we choose is low enough.

Finally, we (almost literally) quote the tradeoff described in [2], which is stated as an empirical fact for which no proof is available:

For small temperatures the samples are close to one-hot but the variance of the gradients is large. For large temperatures, samples are smooth but the variance of the gradients is small.
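This tradeoff is easy to probe empirically; the following sketch (our own setup, with an arbitrary linear loss) estimates the per-coordinate variance of the gradient of the logits at several temperatures:

```python
import torch

torch.manual_seed(0)
weights = torch.tensor([1.0, 2.0, 3.0])  # arbitrary downstream loss weights

def grad_sample(tau):
    log_alpha = torch.log(torch.tensor([0.1, 0.6, 0.3])).requires_grad_()
    u = torch.rand(3)
    g = -torch.log(-torch.log(u))
    y = torch.softmax((log_alpha + g) / tau, dim=-1)  # relaxed sample
    (y * weights).sum().backward()
    return log_alpha.grad

for tau in (0.1, 0.5, 1.0, 5.0):
    grads = torch.stack([grad_sample(tau) for _ in range(2_000)])
    print(f"tau = {tau}: gradient variance = {grads.var(dim=0)}")
# Smaller temperatures give near-one-hot samples but noisier gradients.
```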

Final remarks

As stated at the beginning, these results have provided a good solution to a common hurdle in machine learning, opening new directions for future research: the first (and more immediate) relates to the application of this method to the inference/generation of discrete objects. Indeed, [3] is already an example of a direct application of these techniques to the generation of text using GANs. A second sensible direction of research relates to the creation of new discrete re-parameterizations based on relaxations beyond the Gumbel distribution.

References

[1] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. “The Concrete Distribution: a Continuous Relaxation of Discrete Random Variables.” ICLR Submission, 2017.

[2] Eric Jang, Shixiang Gu and Ben Poole. “Categorical Reparameterization by Gumbel-Softmax.” ICLR Submission, 2017.

[3] Matt Kusner and José Miguel Hernández-Lobato. “GANS for Sequences of Discrete Elements with the Gumbel-Softmax Distribution.” NIPS Workshop on Adversarial Training, 2016.

[4] Diederik P. Kingma and Max Welling. “Auto-Encoding Variational Bayes.” ICLR, 2014.

[5] Ian Goodfellow. “Generative Adversarial Networks.” NIPS Tutorial, 2016.

[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Nets.” NIPS, 2014.