This is a follow-up to my previous post. As I mentioned, Dar led our discussion of two essentially unrelated papers. This post is about the second of the two, “A Variational Perspective on Accelerated Methods in Optimization” by Wibisono et al. [1]. This is a more theoretical paper investigating the nature of accelerated gradient methods and the natural scope for such concepts. Here we’ll introduce and motivate some of the mathematical aspects and physical intuition used in the paper, along with an overview of the main contributions.

Generally, whenever we are able to prove lower bound results about the performance of a class of algorithms for solving a problem, we would like a (practically applicable) method which attains this lower bound. In the case of optimizing a convex function $f$ with $L$-Lipschitz gradient, we can obtain a lower bound of $\Omega(1/k^2)$ for the rate of convergence (after $k$ steps) of any procedure depending linearly on the gradients of $f$; however, vanilla gradient descent only attains a rate of $O(1/k)$. Nesterov’s accelerated gradient descent, on the other hand, achieves this lower bound rate; more broadly, the phenomenon of acceleration allows us to at least improve convergence rates, and in certain circumstances obtain the optimal rate exactly.

However, the nature of acceleration is not well understood. This paper attempts to frame Nesterov’s approach as a methodology for discretizing a certain class of differential equations. These differential equations arise as solutions to the Euler-Lagrange problem of minimizing what the authors call the Bregman Lagrangian, a functional depending (partially) on the Bregman divergence (this being the only link to the generative models paper we also discussed).

Nesterov’s accelerated gradient descent

I probably don’t need to remind you of the defining equation for vanilla gradient descent - but here it is anyway: given an initial position $x_0$ and a sequence of step sizes $(\eta_k)_{k \ge 0}$, we perform iterative updates of the form

$$x_{k+1} = x_k - \eta_k \nabla f(x_k).$$

Nor do I likely need to remind you of where it can go wrong and why we’d like to be able to do better. For now though, we’re only concerned with one aspect - that the convergence rate of gradient descent is $O(1/k)$ after $k$ steps (under the regularity conditions mentioned above in the introduction), whereas the oracle lower bound is $\Omega(1/k^2)$, and so we’d like to try and attain it.
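For concreteness, here is a minimal sketch of the update in code (the function names and the quadratic objective in the usage line are purely illustrative, not from the paper):

```python
import numpy as np

def gradient_descent(grad_f, x0, step_sizes):
    """Vanilla gradient descent: x_{k+1} = x_k - eta_k * grad_f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for eta in step_sizes:
        x = x - eta * grad_f(x)
    return x

# Illustrative usage on f(x) = ||x||^2 / 2, whose gradient is x itself.
x_final = gradient_descent(lambda x: x, np.ones(3), [0.1] * 100)
```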

Thankfully, we’re able to do so, via the following algorithm proposed by Nesterov [4]. We begin by defining the following sequences:

$$\lambda_0 = 0, \qquad \lambda_k = \frac{1 + \sqrt{1 + 4\lambda_{k-1}^2}}{2}, \qquad \gamma_k = \frac{1 - \lambda_k}{\lambda_{k+1}}.$$

Note that $\lambda_k$ is the positive solution to the quadratic equation $\lambda_k^2 - \lambda_k = \lambda_{k-1}^2$, and that $\gamma_k \le 0$ for $k \ge 1$. Taking a pair of initializations $x_0 = y_0$, we then update as follows (given some arbitrary step size $s \le 1/L$):

$$y_{k+1} = x_k - s \nabla f(x_k), \qquad x_{k+1} = (1 - \gamma_k)\, y_{k+1} + \gamma_k y_k.$$
In other words, we perform gradient descent steps, but then update $x_{k+1}$ as a weighted average of the current and past gradient steps. However, why this approach attains the oracle rate (as opposed to merely providing a speed-up) isn’t well understood - looking at the algorithm itself doesn’t provide much intuition, and looking at the derivation of its convergence rate isn’t illuminating either. The approach by the authors which we’re about to discuss is one particular way of looking at how this phenomenon can arise.
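Here is a minimal sketch of the scheme above in code, under the assumption of a fixed step size $s \le 1/L$ (the function name is mine, not the paper’s):

```python
import numpy as np

def nesterov_agd(grad_f, x0, s, n_iters):
    """Nesterov's accelerated gradient descent, following the
    lambda/gamma sequences defined above, with fixed step size s."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    lam = 0.0  # lambda_0 = 0
    for _ in range(n_iters):
        lam_next = (1 + np.sqrt(1 + 4 * lam**2)) / 2  # lambda_{k+1}
        gamma = (1 - lam) / lam_next                  # gamma_k
        y_next = x - s * grad_f(x)                    # gradient step: y_{k+1}
        x = (1 - gamma) * y_next + gamma * y          # weighted average: x_{k+1}
        y, lam = y_next, lam_next
    return y
```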

Optimization in non-Euclidean geometries and the Bregman Lagrangian

Consider the optimization problem of minimizing $f(x)$ over $x \in \mathcal{X}$, where $f$ is a continuously differentiable convex function with a unique minimizer $x^*$. To consider a non-Euclidean setting, we suppose $\mathcal{X}$ is endowed with a distance-generating function $h$ which is continuously differentiable, convex, and such that $\|\nabla h(x)\| \to \infty$ as $\|x\| \to \infty$. This can then be used to define the Bregman divergence

$$D_h(y, x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle,$$

which is non-negative as $h$ is convex. (Note that this is not a metric, as it is neither necessarily symmetric, nor does it necessarily satisfy the triangle inequality.) When $x$ and $y$ are close to each other, this is approximately equal to the Hessian metric

$$D_h(y, x) \approx \frac{1}{2} \langle y - x, \nabla^2 h(x)(y - x) \rangle.$$
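As a quick sanity check (a textbook example, not from the paper): taking $h(x) = \frac{1}{2}\|x\|^2$ recovers the Euclidean setting, since

$$D_h(y, x) = \tfrac{1}{2}\|y\|^2 - \tfrac{1}{2}\|x\|^2 - \langle x, y - x \rangle = \tfrac{1}{2}\|y - x\|^2,$$

while taking $h$ to be the negative entropy on the probability simplex yields the KL divergence.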

The authors then introduce (albeit without much intuition) the Bregman Lagrangian

$$\mathcal{L}(x, v, t) = e^{\gamma_t + \alpha_t} \left( D_h\!\left(x + e^{-\alpha_t} v,\, x\right) - e^{\beta_t} f(x) \right),$$

where $x$ denotes position at time $t$, $v$ the velocity at time $t$, and times $t$ belong to an interval $\mathbb{T} \subseteq \mathbb{R}$. Here, $\alpha_t$, $\beta_t$ and $\gamma_t$ are all continuously differentiable functions of time. From a physical perspective,

  • $f$ plays the role of a potential function, whose damping is controlled by $\beta_t$,
  • $D_h(x + e^{-\alpha_t} v, x)$ plays the role of the kinetic energy (which can be motivated by the approximation to the Hessian metric),
  • $\alpha_t$ controls the damping of the velocity term, and $\gamma_t$ that of the overall Lagrangian.
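For concreteness (a standard specialization, not spelled out in the paper): in the Euclidean case $h(x) = \frac{1}{2}\|x\|^2$, the divergence term is exactly $\frac{1}{2} e^{-2\alpha_t} \|v\|^2$, so the Bregman Lagrangian becomes

$$\mathcal{L}(x, v, t) = e^{\gamma_t - \alpha_t}\, \tfrac{1}{2}\|v\|^2 - e^{\gamma_t + \alpha_t + \beta_t} f(x),$$

which has the familiar kinetic-minus-potential form, with time-dependent weights.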

In such a scenario, if we have kinetic energy $T$ and potential energy $U$, we would set the Lagrangian to be $\mathcal{L} = T - U$; it is easy to see how the Bregman Lagrangian is similar in this respect. We would then like to find a path $(x_t)_{t \in \mathbb{T}}$ which minimizes the action

$$\mathcal{J}(x) = \int_{\mathbb{T}} \mathcal{L}(x_t, \dot{x}_t, t) \, dt$$

(where over-dots denote differentiation with respect to time), which is a functional on paths. The principle of least action from physics states that the true observed motion of a particle with Lagrangian $\mathcal{L}$ corresponds to that which minimizes the action; this is the reason why we are interested in such quantities.

The Euler-Lagrange equation

We can then use the calculus of variations in order to find a curve which minimizes this functional. To explain how we can try and do this, we can borrow some intuition from the simpler problem of finding the minimizing point of a function $g(x)$ for $x \in \mathbb{R}$, for example. Generally, the idea of a stationary point in ordinary calculus is that a function is stationary at a point $x$ when, after perturbing $x$ slightly, the function does not change too much, in the sense that

$$g(x + \epsilon) = g(x) + o(\epsilon) \quad \text{as } \epsilon \to 0.$$

If $g$ is sufficiently differentiable so that for any point $x$ we know that (say via Taylor’s theorem)

$$g(x + \epsilon) = g(x) + \epsilon g'(x) + O(\epsilon^2),$$

then we know that points for which $g'(x) = 0$ are stationary points. From this, we can then build up criteria for when stationary points are local/global minima/maxima and so on.

The same idea is used in the calculus of variations; we aim to find paths such that, when we perturb them slightly by a new path, the corresponding action does not change too much. However, there is a lot more freedom in what it means to perturb away from a path than from a simple point; although this seems like it may be a problem, provided we formulate our perturbations correctly, it is actually quite advantageous. Namely, we consider perturbing the path by another path $\eta$ so that the end-points of the new path are unchanged, and examine the behaviour as the scale of $\eta$ changes, which we can do simply by perturbing by $\epsilon \eta$ for a scalar $\epsilon$ and examining what happens as $\epsilon \to 0$. We now express this idea mathematically.

In the simplest case, we impose boundary conditions on $x$ at the end points of $\mathbb{T}$, so if $\mathbb{T} = [a, b]$, we enforce that $x_a = x^{(a)}$ and $x_b = x^{(b)}$ (say). Given this, we then seek to find paths $x$ for which

$$\mathcal{J}(x + \epsilon \eta) = \mathcal{J}(x) + O_\eta(\epsilon^2)$$

whenever $\eta$ is a (sufficiently smooth) path with $\eta_a = \eta_b = 0$, and here $\epsilon$ acts as a scaling of the perturbation. (The subscript on the big-Oh term denotes that the exact scaling depends on $\eta$.) As $\mathcal{J}(x + \epsilon \eta)$ is a function of $\epsilon$ (keeping $x$ and $\eta$ for now fixed), by using Taylor’s theorem, integration by parts and the boundary conditions on $\eta$, it is a straightforward exercise to show that

$$\mathcal{J}(x + \epsilon \eta) = \mathcal{J}(x) + \epsilon \int_{\mathbb{T}} \left( \frac{\partial \mathcal{L}}{\partial x}(x_t, \dot{x}_t, t) - \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial v}(x_t, \dot{x}_t, t) \right) \eta_t \, dt + O_\eta(\epsilon^2).$$

As we want the $O(\epsilon)$ term to be equal to zero for all “perturbed paths” $\eta$, it follows that the bracketed term in the integrand must be identically zero, and so stationary paths of the action correspond to solutions of the Euler-Lagrange equation

$$\frac{\partial \mathcal{L}}{\partial x}(x_t, \dot{x}_t, t) = \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial v}(x_t, \dot{x}_t, t).$$

As expected from ordinary calculus, a path being stationary is a necessary criterion for it to be a local/global minimum/maximum of the action; sufficient conditions can be motivated along similar lines to how we do so in ordinary calculus.
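As a classical sanity check (standard physics, not specific to the paper): for a particle of mass $m$ with Lagrangian $\mathcal{L}(x, v, t) = \frac{1}{2} m v^2 - U(x)$, the Euler-Lagrange equation recovers Newton’s second law:

$$\frac{\partial \mathcal{L}}{\partial x} = -U'(x_t), \qquad \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial v} = m \ddot{x}_t \quad \Longrightarrow \quad m \ddot{x}_t = -U'(x_t).$$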

Back to the Bregman Lagrangian

The Euler-Lagrange equation for the Bregman Lagrangian gives rise to a second-order differential equation of the form (under the assumption that the Hessian $\nabla^2 h$ is invertible)

$$\ddot{x}_t + (e^{\alpha_t} - \dot{\alpha}_t) \dot{x}_t + e^{2\alpha_t + \beta_t} \left[ \nabla^2 h\!\left(x_t + e^{-\alpha_t} \dot{x}_t\right) \right]^{-1} \nabla f(x_t) + e^{\alpha_t} (\dot{\gamma}_t - e^{\alpha_t}) \left[ \nabla^2 h\!\left(x_t + e^{-\alpha_t} \dot{x}_t\right) \right]^{-1} \left( \nabla h\!\left(x_t + e^{-\alpha_t} \dot{x}_t\right) - \nabla h(x_t) \right) = 0.$$

In order to simplify it slightly, the authors impose the ideal scaling conditions

$$\dot{\beta}_t \le e^{\alpha_t}, \qquad \dot{\gamma}_t = e^{\alpha_t},$$

so that the Euler-Lagrange equation simplifies to (after using the second scaling condition)

$$\ddot{x}_t + (e^{\alpha_t} - \dot{\alpha}_t) \dot{x}_t + e^{2\alpha_t + \beta_t} \left[ \nabla^2 h\!\left(x_t + e^{-\alpha_t} \dot{x}_t\right) \right]^{-1} \nabla f(x_t) = 0,$$

where the only change is that the last term has been removed.

As the overall plan is to find discretized solutions for this class of ODEs and analyse their convergence properties, we first want to obtain a convergence rate for exact solutions to the Euler-Lagrange equation. A frequent tool in the analysis of ODEs and PDEs is that of energy methods - from a physical perspective, we are interested in when quantities such as energy are conserved - which allow us to analyse the behaviour of the system. The authors then define an energy functional of the form

$$\mathcal{E}_t = D_h\!\left(x^*,\, x_t + e^{-\alpha_t} \dot{x}_t\right) + e^{\beta_t} \left( f(x_t) - f(x^*) \right).$$

Again a physical perspective can be used to motivate this. As mentioned before, if a system has kinetic energy $T$ and potential energy $U$, we let the Lagrangian be of the form $\mathcal{L} = T - U$. The corresponding Hamiltonian, which represents the total energy of the system, would be of the form $H = T + U$; the above energy functional can therefore be thought of as corresponding to the system without the extra damping present in the original Lagrangian. However, it is worth noting that $\mathcal{E}_t$ here does not correspond to the true Hamiltonian, which is given by

$$H(x, p, t) = e^{\alpha_t + \gamma_t} \left( D_{h^*}\!\left( \nabla h(x) + e^{-\gamma_t} p,\, \nabla h(x) \right) + e^{\beta_t} f(x) \right),$$

where $h^*$ is the convex dual of $h$ and $p$ now corresponds to the momentum. Anyway, under the ideal scaling conditions, it is then straightforward to show that if $(x_t)$ is a solution to the Euler-Lagrange equation, $\mathcal{E}_t$ is non-increasing in time (this is where the first scaling condition is used), and so it follows that

$$f(x_t) - f(x^*) \le C e^{-\beta_t}$$

for some constant $C$. The authors then show that the family of Bregman Lagrangians is closed under time dilations (that is, under time changes $t \mapsto \tau(t)$) after a suitable transformation of the parameters $\alpha$, $\beta$ and $\gamma$.
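To spell out the last step (standard Lyapunov reasoning, using the non-negativity of the Bregman term in $\mathcal{E}_t$):

$$e^{\beta_t} \left( f(x_t) - f(x^*) \right) \le \mathcal{E}_t \le \mathcal{E}_0 \quad \Longrightarrow \quad f(x_t) - f(x^*) \le \mathcal{E}_0 \, e^{-\beta_t},$$

so in particular one can take $C = \mathcal{E}_0$ above.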

Discretization schemes

Letting $p > 0$ be a parameter, if we make the choice of parameters

$$\alpha_t = \log p - \log t, \qquad \beta_t = p \log t + \log C, \qquad \gamma_t = p \log t,$$

where $C > 0$ is a constant, then the ideal scaling conditions are satisfied, and the corresponding Euler-Lagrange equation

$$\ddot{x}_t + \frac{p+1}{t} \dot{x}_t + C p^2 t^{p-2} \left[ \nabla^2 h\!\left( x_t + \frac{t}{p} \dot{x}_t \right) \right]^{-1} \nabla f(x_t) = 0$$

corresponds to the continuous-time limit of Nesterov’s accelerated mirror descent (when $p = 2$) and of Nesterov’s accelerated cubic-regularized Newton’s method (when $p = 3$).
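It is a quick calculation to verify the ideal scaling conditions for this choice of parameters:

$$e^{\alpha_t} = \frac{p}{t}, \qquad \dot{\beta}_t = \frac{p}{t} = e^{\alpha_t}, \qquad \dot{\gamma}_t = \frac{p}{t} = e^{\alpha_t},$$

so the energy argument above gives a continuous-time convergence rate of $e^{-\beta_t} = O(1/t^p)$.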

The idea is to now discretize the above ODE in a way such that the discrete-time algorithm attains the same convergence rate as in the continuous-time case. The authors note that a naive discretization appears to be unstable (via empirical observation), and ultimately show that, provided $p \ge 2$ and some other smoothness constraints on $f$ and $h$ hold, and provided the step-size parameter $\epsilon$ is not too large, the following discretization scheme has an $O(1/(\epsilon k^p))$ convergence rate:

$$x_{k+1} = \frac{p}{k+p} z_k + \frac{k}{k+p} y_k,$$

$$y_k = \arg\min_{y} \left\{ f_{p-1}(y; x_k) + \frac{N}{\epsilon p} \|y - x_k\|^p \right\},$$

$$z_k = \arg\min_{z} \left\{ C p\, k^{(p-1)} \langle \nabla f(y_k), z \rangle + \frac{1}{\epsilon} D_h(z, z_{k-1}) \right\},$$

where $k^{(p-1)} = k(k+1)\cdots(k+p-2)$ is the rising factorial, $N$ is a constant tied to the smoothness of $f$, and $f_{p-1}(y; x)$ is the $(p-1)$-st order Taylor approximation of $f$ centered at $x$. One can show that the case $p = 2$ will recover Nesterov’s original method in some form. However, while this then gives an algorithm which works for all $p \ge 2$, from a practical perspective it is unclear whether it is worthwhile considering higher orders of $p$ than 2 (due to the need to compute $f_{p-1}$, and hence higher-order derivatives of $f$, at each iteration).
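To make the $p = 2$ case concrete, here is a minimal sketch under the additional assumption of Euclidean geometry, $h(x) = \frac{1}{2}\|x\|^2$ (so $D_h(z, z') = \frac{1}{2}\|z - z'\|^2$), in which case both argmins have closed forms; the function name is mine, and the constants C, N, eps are as in the scheme above:

```python
import numpy as np

def accelerated_p2(grad_f, x0, eps, C, N, n_iters):
    """Sketch of the p = 2 discretization with h(x) = ||x||^2 / 2.
    y-step: argmin of the first-order Taylor model plus (N/(2 eps))||y - x||^2,
    i.e. a gradient step; z-step: a weighted gradient (mirror) step."""
    p = 2
    x = np.asarray(x0, dtype=float)
    z = x.copy()
    y = x.copy()
    for k in range(1, n_iters + 1):
        y = x - (eps / N) * grad_f(x)              # y_k
        z = z - 2 * C * eps * k * grad_f(y)        # z_k, since C * p * k^{(1)} = 2 C k
        x = (p / (k + p)) * z + (k / (k + p)) * y  # x_{k+1}
    return y
```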

Conclusion

By considering discretization schemes for the Euler-Lagrange equations corresponding to a family of Lagrangians, the authors are able to recover the acceleration phenomenon for convergence using techniques motivated by Nesterov’s original arguments. However, there are still some interesting aspects of the Bregman Lagrangian to be explored further; for example, whether the corresponding Hamiltonian can be studied further, and whether the gauge invariance of the Euler-Lagrange equation can be used to give more natural interpretations of the role of the function $h$. In addition, Dar also mentioned that there is a similarity between the comparison of vanilla and accelerated gradient descent, and that of Adagrad [2] and Adam [3]; currently there is no known link to a family of ODEs for these algorithms, and so it would be interesting to see whether a similar approach could be used here.

References

[1] Wibisono, Andre, Ashia C. Wilson, and Michael I. Jordan. “A Variational Perspective on Accelerated Methods in Optimization.” arXiv preprint arXiv:1603.04245 (2016).

[2] Duchi, John, Elad Hazan, and Yoram Singer. “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” Journal of Machine Learning Research 12 (2011): 2121-2159.

[3] Kingma, D.P., and J. Ba. “Adam: A Method for Stochastic Optimization.” The International Conference on Learning Representations (ICLR), San Diego, 2015.

[4] Nesterov, Y. “A method of solving a convex programming problem with convergence rate $O(1/k^2)$.” Soviet Mathematics Doklady, 27(2):372-376, 1983.