Gödel, Incompleteness and Privacy

Avi Wigderson has a nice talk on how Kurt Gödel’s incompleteness theorems bound what can and cannot be computed (or proved) by simple programs.

In a recent post I talked about how Gödel’s theorem was used to show that, for certain kinds of learning algorithms, we can’t know definitively whether the algorithm learns the right thing. Put another way, we can’t definitively know whether there will be a gap in the program’s learning.

The flip side, as Wigderson points out, is that it is probably a good thing that some problems are too hard for a program to solve. That hardness is the key to privacy: the harder it is to decipher an encrypted message, the more confidence you can have that its contents stay private. This principle is at the core of what makes e-commerce possible.
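As a rough illustration of that idea (my own toy sketch, not an example from the talk), consider factoring: multiplying two primes together is cheap, but recovering them from the product is the expensive direction that public-key schemes such as RSA lean on. The primes and the brute-force factoring routine below are purely illustrative.

```python
# Toy sketch: privacy resting on a hard problem.
# Multiplying two primes is cheap; undoing it (factoring) is what's hard.

def trial_division(n):
    """Brute-force factoring -- feasible here only because the primes are tiny."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

p, q = 104723, 104729   # two small primes; real systems use ~300-digit ones
n = p * q               # the easy direction: one multiplication
print("public modulus:", n)
print("recovered factors:", trial_division(n))

# With 300-digit primes the same brute-force search would take longer than
# the age of the universe. That gap between "easy to do" and "hard to undo"
# is what keeps the message private.
```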

Perhaps there is a way to structure our online communications and transactions so that learning our behavior in pernicious ways becomes intractably hard. That might defuse a lot of the emerging fears surrounding AI.

Wigderson makes his point — what he calls the “Unreasonable usefulness of hard problems” — about 30 minutes into the talk.
