Kurt Gödel was one of the great logicians of the 20th century. Although he passed away in 1978, his work is now impacting what we can know about today’s latest A.I. algorithms.
Gödel’s most significant contribution was probably his two Incompleteness Theorems. In essence, they state that the standard machinery of mathematical reasoning is incapable of proving all of the true mathematical statements that can be formulated. A mathematician would say that standard set theory (the collection of axioms known as Zermelo–Fraenkel set theory with the axiom of choice, or ZFC) cannot prove its own consistency, that is, its guarantee that it never proves both a statement and its negation, and that some statements are independent of ZFC: the axioms can neither prove nor refute them. In other words, there are some true things which you just can’t prove with math.
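Stated a bit more formally (this is my own compact paraphrase, with T standing for any consistent, effectively axiomatized theory strong enough to express arithmetic, such as ZFC):

```latex
% Gödel's two incompleteness theorems, informally paraphrased.
% T: any consistent, effectively axiomatized theory containing basic arithmetic.
\textbf{First theorem:}\quad \exists \varphi \;\; T \nvdash \varphi \;\text{ and }\; T \nvdash \neg\varphi
\qquad\text{(there is a sentence $T$ can neither prove nor refute).}

\textbf{Second theorem:}\quad T \nvdash \mathrm{Con}(T)
\qquad\text{(where $\mathrm{Con}(T)$ is the sentence asserting that $T$ is consistent).}
```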
In a sense, this is like the recent U.S. Supreme Court decision on political gerrymandering. The court ruled “that partisan gerrymandering claims present political questions beyond the reach of the federal courts”. Yeah, the court stuck its head in the sand, but ZFC genuinely has no way to tell truth from falsity in certain cases. Gödel gives mathematical formal systems a pass.
It now looks like Gödel has rendered his ruling on machine learning.
A lot of the deep learning algorithms that enable Google Translate and self-driving cars work amazingly well, but there’s not a lot of theory that explains why they work so well; many of the advances over the past ten years amount to neural network hacking. Computer scientists are actively looking at ways of figuring out what machines can learn, and whether there are efficient algorithms for doing so. There was a recent ICML workshop devoted to the theory of deep learning, and the Simons Institute is running a program on the theoretical foundations of deep learning this summer.
However, in a recent paper entitled “Learnability can be undecidable,” Shai Ben-David, Amir Yehudayoff, Shay Moran and their colleagues showed that there is at least one general learning formulation whose learnability is undecidable. That is, although a particular algorithm might learn to predict effectively, you can’t prove that it will.
They looked at a particular kind of learning in which the algorithm tries to learn a function that maximizes the expected value of some metric. The authors chose as a motivating example the task of picking the ads to run on a website, given that the audience can be segmented into a finite set of user types. Using what amounts to server logs, the learner has to output a scoring function that says which ad to show given some information about the user. The scoring function it learns has to maximize the number of ad views by looking at the results of previous views. This kind of problem obviously comes up a lot in the real world, so much so that a whole class of algorithms, Expectation Maximization, has been developed around this kind of expected-value-maximization framework.
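To make the setup concrete, here is a minimal sketch in Python of the kind of learner described above. The user types, the log format, and the `learn_scorer` helper are illustrative inventions of mine, not anything from the paper; the point is just that the learner turns logged (user type, ad, viewed?) records into a rule for which ad to show each user type.

```python
from collections import defaultdict

# Toy "server log": (user_type, ad_shown, was_viewed) records (made up for illustration).
LOG = [
    ("cyclist", "bike_ad", 1), ("cyclist", "car_ad", 0),
    ("cyclist", "bike_ad", 1), ("driver", "car_ad", 1),
    ("driver", "bike_ad", 0), ("driver", "car_ad", 1),
]

def learn_scorer(log):
    """Estimate, for each user type, the view rate of each ad,
    and return a function that picks the best-looking ad."""
    views = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # type -> ad -> [views, shows]
    for user_type, ad, viewed in log:
        stats = views[user_type][ad]
        stats[0] += viewed
        stats[1] += 1

    def score(user_type, ad):
        v, n = views[user_type][ad]
        return v / n if n else 0.0  # empirical view rate

    def best_ad(user_type):
        ads = views[user_type]
        return max(ads, key=lambda ad: score(user_type, ad)) if ads else None

    return best_ad

best_ad = learn_scorer(LOG)
print(best_ad("cyclist"))  # -> "bike_ad": maximizes expected views on the past data
```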
One of the successes of theoretical machine learning is the realization that you can characterize a learning problem by a single number called the VC dimension, which is roughly the largest number of examples that the set of rules you are choosing from can label in every possible way. The authors also cleverly use the fact that learnability is equivalent to compression.
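As a rough illustration (my own toy example, not from the paper), here is a small Python check of “shattering” for one-dimensional threshold rules of the form “label a point 1 if it is at least t”. A set of points is shattered if every possible 0/1 labelling of it can be produced by some threshold; the largest shatterable set size is the VC dimension, which for this simple class is 1.

```python
from itertools import product

def threshold_labels(points, t):
    """Labels produced by the rule 'x >= t'."""
    return tuple(1 if x >= t else 0 for x in points)

def is_shattered(points):
    """True if every 0/1 labelling of `points` is achievable by some threshold."""
    # Candidate thresholds: each point itself, plus one below and one above everything.
    candidates = sorted(points)
    thresholds = [candidates[0] - 1] + candidates + [candidates[-1] + 1]
    achievable = {threshold_labels(points, t) for t in thresholds}
    return all(lab in achievable for lab in product((0, 1), repeat=len(points)))

print(is_shattered([3.0]))       # True: one point can be labelled either 0 or 1
print(is_shattered([1.0, 2.0]))  # False: no threshold labels the smaller point 1 and the larger 0
```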
Think of it this way. If you could magically store all of the possible entries in the server log, you could just look up what previous users had done and base your decision (which ad to show) on that. But chances are that since many of the users who are cyclists liked bicycle ads, you don’t need to store all of the responses for users who are cyclists to guess accurately which ad to show someone who is a cyclist. Compression amounts to successively reducing the information you store (training data or features) as long as your algorithm still performs acceptably.
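Here is a minimal sketch of that compression idea, reusing the toy `LOG`, `learn_scorer` and `best_ad` names from the earlier snippet (again, my own illustrative construction rather than the paper’s formal compression scheme): keep only a handful of records per user type and check that the compressed log still yields the same ad choices.

```python
def compress_log(log, keep_per_type=2):
    """Keep only the first few records for each user type;
    a crude stand-in for a sample compression scheme."""
    kept, counts = [], {}
    for record in log:
        user_type = record[0]
        if counts.get(user_type, 0) < keep_per_type:
            kept.append(record)
            counts[user_type] = counts.get(user_type, 0) + 1
    return kept

small_log = compress_log(LOG, keep_per_type=2)
compressed_best_ad = learn_scorer(small_log)

# If the compressed log produces the same decisions as the full log,
# we have "learned" just as well while storing far less data.
for user_type in ("cyclist", "driver"):
    print(user_type, best_ad(user_type) == compressed_best_ad(user_type))
```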
The authors defined a compression scheme (the equivalent of a learning function) and were then able to link the compression scheme to incompleteness. They showed that the scheme works if and only if the continuum hypothesis, a statement about the sizes of infinite sets, holds. The continuum hypothesis is independent of ZFC: Gödel proved that the standard axioms cannot refute it, and Paul Cohen later proved that they cannot prove it either, so we can’t really say whether things can be learned using this method. That is, we may be able to learn an ad placer in practice, but we can’t use this particular machinery to prove that it will always find the best answer. Machine learning and A.I. problems are mostly intractable, so we rely on simple algorithms to give results that are good enough, but having certainty is always good.
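For readers who want the statement that all of this hinges on, here is the continuum hypothesis in symbols (my own paraphrase of the standard statement, not notation from the paper):

```latex
% The continuum hypothesis (CH): there is no infinite set strictly
% intermediate in size between the natural numbers and the real numbers.
\text{CH:}\qquad \neg\,\exists S \;\; \aleph_0 < |S| < 2^{\aleph_0}
% Equivalently, 2^{\aleph_0} = \aleph_1.
% Gödel (1940) showed ZFC cannot refute CH; Cohen (1963) showed ZFC
% cannot prove it, so CH is independent of ZFC.
```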
Although the authors caution that it is a restricted case and other formulations might lead to better results, there are two other significant consequences I can see. First, the compression scheme they develop is precisely the same structure that is used in Generative Adversarial Networks (GANs). GANs are commonly used to generate fake faces and power photo apps like Pikazo (http://www.pikazoapp.com/). The implication of this research is that we don’t have a good way to prove that a GAN will eventually learn something useful. The second implication is that there may be no provable way of guaranteeing that popular algorithms like Expectation Maximization will avoid optimization traps. The work continues.
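To illustrate the “optimization trap” point, here is a small, self-contained Python sketch (my own toy example, not from the paper) of Expectation Maximization fitting a two-component Gaussian mixture. If both component means are initialized to the same value, the symmetric updates keep them identical forever, so the run stalls at a poor stationary point even though a much better fit exists.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data drawn from two well-separated Gaussians (means -5 and +5, unit variance).
data = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])

def em_two_gaussians(x, mu_init, n_iter=100):
    """EM for a 2-component Gaussian mixture with fixed unit variances
    and equal weights; only the means are learned."""
    mu = np.array(mu_init, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means.
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    # Average log-likelihood under the fitted mixture.
    dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) / np.sqrt(2 * np.pi)
    ll = np.log(0.5 * dens.sum(axis=1)).mean()
    return mu, ll

print(em_two_gaussians(data, mu_init=[0.0, 0.0]))   # stuck: both means stay equal, low likelihood
print(em_two_gaussians(data, mu_init=[-1.0, 1.0]))  # recovers means near -5 and +5
```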
It may be no coincidence that the Gödel Institute is in the same complex of buildings as the Vienna University AI institute.

Avi Wigderson has a nice talk about the connection between Gödel’s theorems and computation. If we can’t even prove that a program will be bug-free, then we shouldn’t be too surprised that we can’t prove that a program learns the right thing.