Category: Algorithms

A/B testing, Algorithms, Data Science

Christo Wilson Discusses the Ethics of Online Behavioral Experiments

If your company runs A/B tests involving its user community, this talk is a must-see. Christo Wilson of Northeastern University discusses an analysis his lab ran on how companies use the Optimizely platform to conduct online experiments. Although these experiments tend to be innocuous, there is a tremendous need for transparency and for mechanisms of accountability. How is your company addressing this?

Data for Breakfast

On May 1, 2019, Dr. Christo Wilson gave a talk on his investigation into online behavioral experiments. The talk was based on a paper entitled Who’s the Guinea Pig? Investigating Online A/B/n Tests in-the-Wild, which he and his students presented at the 2019 ACM Conference on Fairness, Accountability, and Transparency in Atlanta, Georgia.

Online behavioral experiments (OBEs) are studies (also known as A/B/n tests) that websites run to gain insight into their users’ preferences. Users typically aren’t asked for consent, and the studies themselves are usually benign: a typical OBE explores questions such as whether changing the background color influences how users interact with the site, or whether users are more likely to read an article if the font is slightly larger.

Sometimes, these studies cross ethical boundaries. For example, Facebook conducted an ethically problematic experiment designed to manipulate the emotional state of its users.


AI, Algorithms

Gödel, Incompleteness and Privacy

Avi Wigderson has a nice talk on how Kurt Gödel’s incompleteness theorems bound what can and cannot be computed (or proved) by simple programs.

In this recent post I talked about how Gödel’s theorem was used to show that, for certain kinds of learning algorithms, we can’t know definitively whether the algorithm learns the right thing or not. Put another way, we can’t definitively rule out a gap in what the program learns.

The flip side of this, as Wigderson points out, is that it is probably a good thing that there are certain things that are too hard for a program to figure out. This hardness is the key to privacy — the harder it is to decipher an encrypted message, the more you can have confidence in keeping the message content private. This principle is at the core of what allows e-commerce.

Perhaps there is a way to structure our online communications or transactions so that learning our behavior — in pernicious ways — becomes impossibly hard. This might defuse a lot of the emerging fears surrounding AI.

Wigderson makes his point — what he calls the “Unreasonable usefulness of hard problems” — about 30 minutes into the talk.

AI, Algorithms, Machine Learning

Gödel, Incompleteness, and AI

Kurt Gödel was one of the great logicians of the 20th century. Although he passed away in 1978, his work is now impacting what we can know about today’s latest A.I. algorithms.

Gödel’s most significant contribution was probably his two Incompleteness Theorems. In essence, they state that the standard machinery of mathematical reasoning is incapable of proving all of the true mathematical statements that can be formulated. A mathematician would say that certain statements are independent of standard set theory (the collection of axioms known as Zermelo–Fraenkel set theory with the axiom of choice, or ZFC): they can be neither proved nor disproved from those axioms. That is, there are some true things which you just can’t prove with math.

In a sense, this is like the recent U.S. Supreme Court decision on political gerrymandering. The court ruled “that partisan gerrymandering claims present political questions beyond the reach of the federal courts”. Yeah, the court stuck its head in the sand, but ZFC genuinely has no way to tell truth from falsity in certain cases. Gödel gives mathematical formal systems a pass.

It now looks like Gödel has rendered his ruling on machine learning.

A lot of the deep learning algorithms that enable Google Translate and self-driving cars work amazingly well, but there isn’t a lot of theory that explains why; many of the advances over the past ten years amount to neural-network hacking. Computer scientists are actively looking at ways of figuring out what machines can learn, and whether there are efficient algorithms for doing so. There was a recent ICML workshop devoted to the theory of deep learning, and the Simons Institute is running a program on the theoretical foundations of deep learning this summer.

However, in a recent paper entitled Learnability Can Be Undecidable, Shai Ben-David, Shay Moran, Amir Yehudayoff, and their colleagues showed that there is at least one generalized learning formulation whose learnability is undecidable. That is, although a particular algorithm might learn to predict effectively, you can’t prove that it will.

They looked at a particular kind of learning in which the algorithm tries to learn a function that maximizes the expected value of some metric. The authors chose as a motivating example the task of picking the ads to run on a website, given that the audience can be segmented into a finite set of user types. Using what amounts to server logs, the learner has to output a scoring function that says which ad to show given some information about the user. The learned scoring function has to maximize the number of ad views by looking at the results of previous views. This kind of problem obviously comes up a lot in the real world; so much so that a whole class of algorithms, Expectation Maximization, has been developed around this framework.
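As a toy sketch of that setup (my own illustration, not the paper’s formal model; the log format and function names are made up), a learner can turn a server log of (user type, ad, clicked) records into a scoring function by computing empirical click-through rates:

```python
from collections import defaultdict

def learn_scorer(log):
    """Turn (user_type, ad, clicked) log records into an ad picker.

    The 'scoring function' here is just the empirical click-through
    rate for each (user type, ad) pair seen in the log.
    """
    shows = defaultdict(int)
    clicks = defaultdict(int)
    for user_type, ad, clicked in log:
        shows[(user_type, ad)] += 1
        clicks[(user_type, ad)] += int(clicked)

    def score(user_type, ad):
        n = shows[(user_type, ad)]
        return clicks[(user_type, ad)] / n if n else 0.0

    def best_ad(user_type, candidate_ads):
        # Show the ad with the highest observed click-through rate.
        return max(candidate_ads, key=lambda ad: score(user_type, ad))

    return best_ad

log = [
    ("cyclist", "bike_ad", True), ("cyclist", "bike_ad", True),
    ("cyclist", "car_ad", False), ("gamer", "console_ad", True),
    ("gamer", "bike_ad", False),
]
best_ad = learn_scorer(log)
print(best_ad("cyclist", ["bike_ad", "car_ad"]))  # bike_ad
```

The undecidability result is about whether a learner like this provably approaches the best possible scoring function in general, not about whether the bookkeeping above runs.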

One of the successes of theoretical machine learning is the realization that you can characterize a learning problem with a single number called the VC dimension, which is roughly the size of the largest set of examples that your class of classifiers can label in every possible way (the class is said to “shatter” such a set). The authors also cleverly use the fact that learnability is equivalent to compression.
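Shattering is easy to see in code. Here is a small, self-contained check (my own illustration, not from the paper) that threshold classifiers on the real line can shatter one point but not two, i.e. their VC dimension is 1:

```python
def can_shatter(points, hypotheses):
    """True if the hypothesis class realizes every labeling of `points`."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Threshold classifiers on the real line: h_t(x) = (x >= t).
thresholds = [lambda x, t=t: x >= t for t in (-10, 0.5, 1.5, 10)]

print(can_shatter([1], thresholds))     # True: one point can be shattered
print(can_shatter([1, 2], thresholds))  # False: no threshold labels 1 as
                                        # positive and 2 as negative
```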

Think of it this way. If you could magically store all of the possible entries in the server log, you could just look up what previous users had done and base your decision (which ad to show) on that. But chances are that since many of the users who are cyclists liked bicycle ads, you don’t need to store all of the responses for every cyclist to guess accurately which ad to show the next one. Compression amounts to successively reducing the information you store (training data or features) for as long as your algorithm performs acceptably.
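A crude way to see this at work (a toy sketch with a made-up one-feature data set, not the paper’s sample-compression scheme): greedily discard stored examples as long as a nearest-neighbor rule over what remains still classifies the full log correctly.

```python
def predict(prototypes, x):
    # 1-nearest-neighbor on a single numeric feature.
    return min(prototypes, key=lambda p: abs(p[0] - x))[1]

def accuracy(prototypes, data):
    return sum(predict(prototypes, x) == y for x, y in data) / len(data)

def condense(train):
    """Drop stored examples while accuracy on the full set is unchanged."""
    kept = list(train)
    baseline = accuracy(kept, train)
    for example in list(kept):
        trial = [e for e in kept if e is not example]
        if trial and accuracy(trial, train) >= baseline:
            kept = trial
    return kept

# Cyclists (feature near 0) click bike ads; gamers (near 10) don't.
train = [(0, "bike"), (1, "bike"), (2, "bike"), (9, "console"), (10, "console")]
kept = condense(train)
print(f"kept {len(kept)} of {len(train)} examples")  # kept 2 of 5 examples
```

Two prototypes, one per user type, classify the whole log as well as all five stored examples did; that shrinkage is the compression.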

The authors defined a compression scheme (the equivalent of a learning function) and were then able to link that scheme to incompleteness. They showed that the scheme works if and only if a particular independent statement, the continuum hypothesis, is true. Since Gödel and Paul Cohen between them established that the continuum hypothesis can be neither proved nor disproved within ZFC, we can’t really say whether things can be learned using this method. That is, we may be able to learn an ad placer in practice, but we can’t use this particular machinery to prove that it will always find the best answer. In machine learning and A.I. we mostly rely on simple algorithms to give results that are good enough; still, having certainty is always good.

Although the authors caution that this is a restricted case and other formulations might lead to better results, I can see two other significant consequences. First, the compression scheme they develop has the same structure as the one used in Generative Adversarial Networks (GANs). GANs are commonly used to generate fake faces and power photo apps like Pikazo (http://www.pikazoapp.com/). The implication of this research is that we don’t have a good way to prove that a GAN will eventually learn something useful. The second implication is that there may be no provable way of guaranteeing that popular algorithms like Expectation Maximization will avoid optimization traps. The work continues.

It may be no coincidence that the Gödel Institute is in the same complex of buildings as the Vienna University AI institute.

Next door to the Gödel Institute is the Vienna AI institute

Avi Wigderson has a nice talk about the connection between Gödel’s theorems and computation. If we can’t even prove that a program will be bug-free, then we shouldn’t be too surprised that we can’t prove that a program learns the right thing.

A nice talk by Avi Wigderson. Sometimes hacking is all you got.
AI, Algorithms, Atlanta

The city of Atlanta doesn’t use facial recognition — so why does Delta Airlines?

I recently made an inquiry with the City of Atlanta’s Mayor’s office about its use of facial recognition software. I received the following reply on the Mayor’s behalf from the Atlanta Police Department:

The Atlanta Police Department does not currently use nor the capability to perform facial recognition. As we do not have the capability nor sought the use of, we not have specific legislation design for or around facial recognition technology.

Delta Airlines, a company based in Atlanta, continues to promote the use of facial recognition software and, according to this Wired article, makes it difficult for travelers to opt out of its use.

There are several concerns with use of facial recognition technology, succinctly laid out by the Electronic Frontier Foundation:

Face recognition is a method of identifying or verifying the identity of an individual using their face. Face recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops. 

But face recognition data can be prone to error, which can implicate people for crimes they haven’t committed. Facial recognition software is particularly bad at recognizing African Americans and other ethnic minorities, women, and young people, often misidentifying or failing to identify them, disparately impacting certain groups.

Additionally, face recognition has been used to target people engaging in protected speech

Electronic Frontier Foundation at https://www.eff.org/pages/face-recognition

So in other words, the technology has the potential for abuses of free assembly and privacy, and because the algorithms used are typically less accurate for people of color (POC), the potential for abuse is multiplied.

There are ongoing dialogs (here is the U.S. House discussion of the impact on civil liberties) on when, how, and whether to deploy this technology.

Do me a favor? If you happen to fly Delta, or are a member of their frequent-flyer program, could you kindly ask for a non-facial-recognition check-in? Asking for more transparency on the use and auditing of the software would be an important step forward, too.

AI, Algorithms, Social Justice

San Francisco passes facial recognition ordinance

San Francisco recently passed an ordinance controlling the use of facial recognition in the city. The ordinance was in large part thanks to the pioneering research of Joy Buolamwini.

The argument against the technology is twofold: first, the technology is highly invasive in public spaces and may constitute a direct threat to basic (US) constitutional rights of freedom of assembly; secondly the feature extraction and training set construction methodologies (for newer deep learning based models) have been shown to have racial and gender biases “baked in”. For example, the systems analyzed in Buolamwini’s work are less accurate for Black people and women — either because the data sets used for training include mostly white male faces, or the image processing algorithms focus on image components and make assumptions more common to European faces.

Consider uses in policing, where an inaccurate system mis-identifies a Black or Latinx person as a felon. Especially when there is no transparency into the use or internals of such systems, the chances for abuse and injustice are incredible. Despite these concerns, Amazon shareholders think it is OK to release the technology on the public.

Do you know if such a system is deployed in your city? If so, are there measures to control its use, or make audits available to your community? If not, have you considered contacting your elected representatives to support or discuss appropriate safeguards?

Algorithms, Machine Learning

You should record technical talks!

A few days ago I attended the talk “Sparsity, oracles and inference in high-dimensional statistics” by Sara van de Geer, who is visiting Georgia Tech. The talk is described here.

But I didn’t record the talk! I had a working iPhone! All I have is an afterthought photo of the whiteboard that remained after the lecture.


Just focus on lambda!

Phones are ubiquitous, and there’s nothing like a short clip to distill the essence of an idea or a lecture. Maybe it’s all those “No recording devices, please!” announcements at concerts, or maybe my videography skills are in need of serious help.

PSA: If you think that someone is bringing across some important knowledge, record it — give them their attribution, don’t steal their stuff — and share that knowledge with the world!

So what was the talk about? If you do machine learning, the idea of regularization is probably familiar. L1 regularization, a.k.a. the Least Absolute Shrinkage and Selection Operator, a.k.a. the lasso, assigns a penalty to the absolute value of the predictor weights. It’s a technique that reduces the tendency to overfit to the training data. There’s a whole book on it called Statistical Learning with Sparsity that you can download for free!

The amazing thing about the lasso is that it also drives extraneous parameters close to zero: it can reduce the number of parameters you need in your model, yielding a model that is more sparse (just remove the close-to-zero parameters). This can make the model faster to compute.

The main things I picked up were that there are some bounds on the error for lasso regularization that can be expressed in terms of the number of parameters and the number of observations in your training set. The error should be within a constant of \sqrt{s_{0} \log(p)/n}, where I believe s_{0} is the sparsity you expect (the number of non-zero weights). You also get a similar expression for a good starting value for the penalty, \lambda \gg \sqrt{\log(p)/n}. Here p is the number of parameters in your model and n the number of observations you are training with. Scikit-learn or your favorite machine learning library probably comes with the lasso, but it doesn’t look like these bounds are baked in.
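To make that concrete, here is a minimal sketch on synthetic data that fits the lasso by proximal gradient descent (ISTA) with a penalty on the order \sqrt{\log(p)/n}. This is my own toy example, not the talk’s; the constants are guesses, and in practice you would reach for scikit-learn’s Lasso instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s0 = 200, 50, 5                 # observations, parameters, sparsity
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:s0] = 2.0                     # only the first s0 features matter
y = X @ w_true + 0.5 * rng.standard_normal(n)

lam = np.sqrt(np.log(p) / n)          # penalty on the suggested order

# ISTA for the lasso objective (1/2n)||y - Xw||^2 + lam * ||w||_1.
w = np.zeros(p)
step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.sum(w != 0), "of", p, "weights survive")
```

The soft-thresholding step should zero out most of the 45 irrelevant weights exactly while the five true ones survive, which is the sparsity story above.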

She introduced something called the compatibility constant, which is discussed further in a couple of papers [Belloni et al. 2014, Dalalyan 2017]. She also discussed how the lasso behaves when you assume noisy observations. The final lecture is September 6th at Georgia Tech, on applications to inference.

Wouldn’t it have been better if I’d just recorded it though??

Algorithms, Data Science

The science of data science

The Foundations of Data Science Boot Camp given last week (August 27–31) at the Simons Institute in Berkeley explored how pure mathematics and theoretical computer science are providing actionable insights that the working data scientist can use — or at least ponder.

I found the talk below by Ravi Kannan useful in pointing out how dimensionality-reduction techniques like the SVD can set clustering up for success. When dealing with immense data sets, this can be the difference between useful clusters and garbage.
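The flavor of the idea, in a made-up two-cluster example (not Kannan’s construction): project high-dimensional points onto the top singular directions first, and cluster structure that was smeared across 100 noisy coordinates becomes obvious in 2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two clusters in 100 dimensions, separated along only a few coordinates.
centers = np.zeros((2, 100))
centers[0, :3], centers[1, :3] = 5.0, -5.0
X = np.vstack([c + rng.standard_normal((50, 100)) for c in centers])

# Project onto the top-2 right singular vectors of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T

# The first SVD coordinate alone now separates the two groups.
print(X2[:50, 0].mean(), X2[50:, 0].mean())
```

Running a clustering algorithm on X2 instead of X is both cheaper and far more likely to find the true groups.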

I also thought that David P. Woodruff‘s lecture on a dimensionality-reduction technique called sketching was impressive for its clarity. As a data scientist or analyst, you’re often in a dilemma when your Impala cluster runs out of memory for that critical model build; you may just have to sample from that terabyte pile of web pages. It is good to know that you have some math magic behind you when the time comes.
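For intuition about sketching, here is a plain Gaussian random projection (only the simplest of the sketches Woodruff covers; the dimensions are my own choices): compressing 10,000-dimensional vectors down to 400 coordinates approximately preserves the distance between them.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 10_000, 400                    # ambient and sketch dimensions

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Gaussian sketch: a random k x d matrix, scaled so norms are
# preserved in expectation.
S = rng.standard_normal((k, d)) / np.sqrt(k)

true_dist = np.linalg.norm(x - y)
sketch_dist = np.linalg.norm(S @ x - S @ y)
print(f"true {true_dist:.1f} vs sketched {sketch_dist:.1f}")
```

With k around 400 the sketched distance typically comes back within a few percent of the truth, which is what lets you cluster or regress on the sketch instead of the raw data.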

Santosh Vempala thinks the seminar was a better value than Netflix. I’m not sure about that, but those were some good lectures.