Thursday, 5 March 2020

Gradient Descent Unstable For GANs?

When training neural networks we use gradient descent to follow a path down the loss function towards the combination of learnable parameters that minimises the error. This is a very well researched area and today’s techniques are sophisticated, the Adam optimiser being a good example.

The dynamics of a GAN are different from those of a simple neural network. The generator and discriminator networks are trying to achieve opposing objectives. There are parallels between a GAN and adversarial games where one player tries to maximise an objective while the other tries to minimise it, each undoing the benefit of the opponent’s previous move.

Is gradient descent suitable for finding the correct, or even a good enough, combination of learnable parameters in such adversarial games? This might seem like an unnecessary question, but the answer is rather interesting.


Simple Adversarial Example


The following is a very simple objective function:

f(x, y) = x · y

One player has control over the values of x and is trying to maximise the objective f. A second player has control over y and is trying to minimise the objective f.

Let’s visualise this function to get a feel for it. The following picture shows a surface plot of f = x·y from three slightly different angles.


We can see that the surface of f = x·y is a saddle. That means, along one direction the values rise then fall, but in another direction, the values fall then rise.
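
If you want to draw this saddle yourself, a minimal matplotlib sketch might look like the following; the grid range, colour map and viewing angle are my own choices, not taken from the original figures.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D    # registers the 3d projection on older matplotlib

# sample f = x·y over a square grid
x = np.linspace(-2.0, 2.0, 100)
y = np.linspace(-2.0, 2.0, 100)
X, Y = np.meshgrid(x, y)
F = X * Y

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, F, cmap='viridis')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f')
ax.view_init(elev=30, azim=-60)            # change the angles to see other views of the saddle
plt.show()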

The following picture shows the same function from above, using colours to indicate the values of f. Also marked are the directions in which f increases.


If we used our intuition to find a good solution to this adversarial game, we would probably say the best answer is the middle of that saddle at (x,y) = (0,0). At this point, if one player sets x = 0, the second player can’t affect the value of f no matter what value of y is chosen. The same applies if y = 0: no value of x can change the value of f. The value of f at this point is also a fair compromise, because elsewhere there are as many higher values of f as there are lower.

You can explore the surface interactively yourself using the math3d.org website:



Let’s now move away from intuition and work out the answer by simulating both players using gradient descent, each trying to find a good solution for themselves.

You’ll remember from Make Your Own Neural Network that parameters are adjusted by a small amount that depends on the gradient of the objective function.

x ← x + lr · (∂f/∂x)
y ← y − lr · (∂f/∂y)

The reason we have different signs in these update rules is that y is trying to minimise f by moving down the gradient, while x is trying to maximise f by moving up the gradient. Here lr is the usual learning rate.

Because we know f = x·y, the gradients are simply ∂f/∂x = y and ∂f/∂y = x, so we can write those update rules with the gradients worked out.

x ← x + lr · y
y ← y − lr · x

We can write some code to pick starting values for x and y, and then repeatedly apply these update rules to get successive x and y values.
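
A minimal sketch of that code might look like the following; the starting point, learning rate and number of steps are illustrative choices, not taken from the original notebook.

# a minimal sketch of the adversarial game: x does gradient ascent, y does gradient descent on f = x·y
lr = 0.1                       # learning rate (illustrative)
x, y = 1.0, 1.0                # starting values (illustrative)

x_history, y_history = [x], [y]

for step in range(1000):
    grad_x = y                 # ∂f/∂x for f = x·y
    grad_y = x                 # ∂f/∂y for f = x·y
    x, y = x + lr * grad_x, y - lr * grad_y    # simultaneous updates for both players
    x_history.append(x)
    y_history.append(y)

print(x, y)                    # the magnitudes keep growing instead of settling at (0, 0)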

The following shows how x and y evolve as training progresses.


We can see that the values of x and y don’t converge, but oscillate with ever greater amplitude. Trying different starting values leads to the same behaviour. Reducing the learning rate merely delays the inevitable divergence.

This is bad. It shows that gradient descent can’t find a good solution to this simple adversarial game, and even worse, the method leads to disastrous divergence.

The following picture shows x and y plotted together. We can see the values orbit around the ideal point (0,0) but run away from it.
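
A short matplotlib sketch to draw this orbit, assuming the x_history and y_history lists built in the earlier sketch, might be:

# plot the (x, y) trajectory produced by the simulation above
import matplotlib.pyplot as plt

plt.plot(x_history, y_history)
plt.scatter([0], [0], marker='x', color='red')    # the ideal point (0, 0)
plt.xlabel('x')
plt.ylabel('y')
plt.axis('equal')                                 # equal scales so the spiral isn't distorted
plt.show()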


It can be shown mathematically (see below) that the best-case scenario is that (x,y) orbits in a fixed circle around (0,0) without getting closer to it, but this only holds when the update step is infinitesimally small. As soon as we have a finite step size, as we do when we approximate that continuous process in discrete steps, the orbit diverges.
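
We can also see why a finite step size pushes the orbit outwards with a quick calculation. One discrete update takes (x, y) to

x′ = x + lr · y
y′ = y − lr · x

so the squared distance from the origin becomes

x′² + y′² = (x + lr · y)² + (y − lr · x)² = (1 + lr²) · (x² + y²)

Every step multiplies x² + y² by (1 + lr²), which is greater than 1 for any finite learning rate, so the spiral can only grow.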

You can explore the code which plays this adversarial game using gradient descent here:




Gradient Descent Isn’t Ideal For Adversarial Games


We’ve shown that gradient descent fails to find a solution to an adversarial game with a very simple objective function. In fact, it doesn’t just fail to find a solution, it catastrophically diverges. In contrast, gradient descent used in the normal way to minimise a function will, with a sensible learning rate, settle into a minimum, even if it isn’t the global minimum.

Does this mean GAN training will fail in general? No.

Realistic GANs trained on meaningful data have much more complex loss functions, and that can reduce the chances of runaway divergence. That’s why GAN training throughout this book has worked fairly well. But this analysis does indicate why training GANs is hard, and why it can become chaotic. Orbiting around a good solution might also explain why some GANs, with extended training, seem to cycle through different collapsed modes rather than improving the quality of the images themselves.

Fundamentally, gradient descent is the wrong approach for GANs, even if it works well enough in many cases. Finding optimisation techniques designed for adversarial dynamics like those in GANs is currently an open research question, with some researchers already publishing encouraging results.


Why A Circular Orbit?


Above we stated that (x,y) orbits in a circle when two players each use gradient descent to optimise f = x·y in opposite directions. Here we’ll do the maths to show why it is a circle.

Let’s look at the update rules again.

x ← x + lr · y
y ← y − lr · x

If we want to know how x and y evolve over time t, we can write:

dx/dt = +lr · y
dy/dt = −lr · x

If we take the second derivatives with respect to t, we get the following.

d²x/dt² = lr · dy/dt = −lr² · x
d²y/dt² = −lr · dx/dt = −lr² · y

You may remember from school algebra that expressions of the form d²y/dt² = −a²·y have solutions of the form y = sin(at) or y = cos(at). To satisfy the first derivatives above, we need x and y to be the following combination.

x = sin(lr · t)
y = cos(lr · t)

These describe (x,y) moving around a unit circle with angular speed lr.
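
If you want to double-check this solution, a small sympy sketch (my own check, not part of the original derivation) confirms that it satisfies the first derivatives above and stays on the unit circle.

import sympy as sp

t, lr = sp.symbols('t lr', positive=True)
x = sp.sin(lr * t)
y = sp.cos(lr * t)

# the solution must satisfy dx/dt = +lr·y and dy/dt = −lr·x
print(sp.simplify(sp.diff(x, t) - lr * y))    # prints 0
print(sp.simplify(sp.diff(y, t) + lr * x))    # prints 0

# and (x, y) stays on the unit circle
print(sp.simplify(x**2 + y**2))               # prints 1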