All posts by charlescearl

Data scientist at Automattic.com.

AI, Social Justice

Which cities use facial recognition?

San Francisco famously banned the use of facial recognition by police and other municipal authorities on May 14th of this year. Citizens in Detroit, angered by the use of facial recognition in Project Green Light, forced a moratorium on its use. Orlando has halted an immediate deployment, although a trial involving only police officers is being conducted. According to Natalie Bednarz, the Digital Communications Supervisor in the Orlando office of Communications and Neighborhood Relations:

if the City of Orlando Police Department decides to ultimately implement official use of the technology, City staff would explore procurement and develop a policy governing the technology

Email communication from the Orlando office of Communications and Neighborhood Relations

This report by Georgetown Law School finds that Chicago uses facial recognition in policing and throughout its mass transit system.

Beyond surveillance cameras, several cities have been forced to turn over driver’s license photos for matching by ICE’s facial recognition software to identify persons who are not U.S. citizens. Not only is facial recognition software notoriously bad at identifying the faces of African Americans, but these systems also score poorly at identifying people who identify ethnically as Latinx.

In 2016, Georgetown Law School put together a list of city and state governments across the U.S. that use facial recognition.

Should facial recognition be banned altogether in policing?

AI, Data Science, Inclusion

Black In AI workshop call for papers

If you are a student, researcher, or professor at a Historically Black College or University and work actively in data science, machine learning, or artificial intelligence, please consider submitting a paper to the 2019 Black in AI workshop. The deadline is now August 7 — I’d encourage submission even (especially!!) if your research and ideas are still coming together. There are also travel grants available and I’ll post that application soon.

The workshop occurs during the 2019 NeurIPS conference (probably the most attended conference on deep learning and other AI architectures). The specific goal of the workshop is to encourage the involvement of people from Africa and the African diaspora in the AI field, and to promote research that benefits (and does no harm to) the global Black community.

Paper submission extended deadline: Wed August 7, 2019 11:00 PM UTC

Submit at: https://cmt3.research.microsoft.com/BLACKINAI2019

The site will start accepting submissions on July 7th.

No further extensions will be offered for submissions.

We invite submissions for the Third Black in AI Workshop (co-located with NeurIPS). We welcome research work in artificial intelligence, computational neuroscience, and its applications. These include, but are not limited to, deep learning,  knowledge reasoning, machine learning, multi-agent systems, statistical reasoning, theory, computer vision, natural language processing, robotics, as well as applications of AI to other domains such as health and education, and submissions concerning fairness, ethics, and transparency in AI. 

Papers may introduce new theory, methodology, applications or product demonstrations. 

We also welcome position papers that synthesize existing work, identify future directions, or inform on neglected/abandoned areas where AI could be impactful. Examples are work on AI & Arts, AI & Policy, etc.

Submissions will fall into one of these four tracks:

  1. Machine learning algorithms
  2. Applications of AI 
  3. Position papers
  4. Product demonstrations

Work may be previously published, completed, or ongoing. The workshop will not publish proceedings. We encourage all Black researchers in areas related to AI to submit their work; they need not be the first author of the work.

Formatting instructions

All submissions must be in PDF format. Submissions are limited to two content pages, including all figures and tables. An additional page containing only references is allowed. Submissions should be in a single column, typeset using 11-point or larger fonts, and have at least a 1-inch margin all around. Submissions that do not follow these guidelines risk being rejected without consideration of their merits.
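For authors working in LaTeX, a preamble along these lines (my sketch, not an official workshop template) satisfies the column, font, and margin requirements:

```latex
\documentclass[11pt]{article}      % 11-point or larger fonts
\usepackage[margin=1in]{geometry}  % at least a 1-inch margin all around
% The article class is single column by default, as required.

\begin{document}
% Two content pages maximum, including all figures and tables;
% one additional page may contain only references.
\end{document}
```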

Double-blinded reviews

Submissions will be peer-reviewed by at least 2 reviewers, in addition to an area chair. The reviewing process will be double-blinded at the level of the reviewers. As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission and you should avoid providing any other identifying information.

Travel grants

Use this link to apply for travel grants to the conference. They are available for eligible attendees, and applications should be submitted by Wed July 31, 2019 11:00 PM UTC at the latest (note that this is before the extended paper submission deadline).

Content guidelines

Submissions must state the research problem, motivation, and contribution. Submissions must be self-contained and include all figures, tables, and references. 

Here is a set of good sample papers from 2017: sample papers

Questions? Contact us at bai2019@blackinai.org.

AI, Algorithms, Machine Learning

Gödel, Incompleteness, and AI

Kurt Gödel was one of the great logicians of the 20th century. Although he passed away in 1978, his work is now impacting what we can know about today’s latest A.I. algorithms.

Gödel’s most significant contribution was probably his two incompleteness theorems. In essence, they state that the standard machinery of mathematical reasoning is incapable of proving all of the true mathematical statements that could be formulated. A mathematician would say that the consistency of standard set theory (a collection of axioms known as Zermelo–Fraenkel set theory with choice, or ZFC) cannot be proven within ZFC itself, and that some statements are independent of ZFC: the axioms can prove neither the statement nor its negation. That is, there are some true things which you just can’t prove with math.
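For reference, here is a more careful statement of the two theorems (my own paraphrase, typeset as a small LaTeX snippet):

```latex
\documentclass{article}
\usepackage{amssymb}  % for \nvdash, read "does not prove"
\begin{document}

\textbf{First incompleteness theorem.} If $F$ is a consistent, effectively
axiomatized formal system that can express basic arithmetic, then there is
a sentence $G_F$ that $F$ can neither prove nor refute:
$F \nvdash G_F$ and $F \nvdash \lnot G_F$.

\textbf{Second incompleteness theorem.} For any such system $F$,
$F \nvdash \mathrm{Con}(F)$; that is, $F$ cannot prove its own consistency.

\end{document}
```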

In a sense, this is like the recent U.S. Supreme Court decision on political gerrymandering. The court ruled “that partisan gerrymandering claims present political questions beyond the reach of the federal courts”. Yeah, the court stuck their heads in the sand, but ZFC just has no way to tell truth from falsity in certain cases. Gödel gives mathematical formal systems a pass.

It now looks like Gödel has rendered his ruling on machine learning.

A lot of the deep learning algorithms that enable Google Translate and self-driving cars work amazingly well, but there’s not a lot of theory that explains why they work so well — a lot of the advances over the past ten years amount to neural network hacking. Computer scientists are actively looking at ways of figuring out what machines can learn, and whether there are efficient algorithms for doing so. There was a recent ICML workshop devoted to the theory of deep learning, and the Simons Institute is running a program on the theoretical foundations of deep learning this summer.

However, in a recent paper entitled Learnability can be undecidable, Shai Ben-David, Amir Yehudayoff, Shay Moran, and colleagues showed that there is at least one generalized learning formulation which is undecidable. That is, although a particular algorithm might learn to predict effectively, you can’t prove that it will.

They looked at a particular kind of learning in which the algorithm tries to learn a function that maximizes the expected value of some metric. The authors chose as a motivating example the task of picking the ads to run on a website, given that the audience can be segmented into a finite set of user types. Using what amounts to server logs, the learner has to output a scoring function that says which ad to show given some information about the user. The learned scoring function has to maximize the number of ad views by looking at the results of previous views. This kind of problem obviously comes up a lot in the real world — so much so that there is a whole class of algorithms, Expectation Maximization, that has been developed around this framework.
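As a concrete toy illustration of this setup (the data, names, and numbers below are my own invention, not the paper’s), a learner can estimate such a scoring function directly from logged impressions:

```python
from collections import defaultdict

# Toy version of the ad-placement problem described above. Each log
# entry is (user_type, ad_shown, was_viewed); user types come from a
# small finite set, as in the paper's motivating example.
log = [
    ("cyclist", "bike_ad", True), ("cyclist", "car_ad", False),
    ("cyclist", "bike_ad", True), ("driver", "car_ad", True),
    ("driver", "bike_ad", False), ("driver", "car_ad", True),
]

def learn_scorer(log):
    """Return f(user_type) -> ad that maximizes the empirical view rate."""
    views = defaultdict(int)  # (user_type, ad) -> number of views
    shows = defaultdict(int)  # (user_type, ad) -> number of impressions
    for user_type, ad, viewed in log:
        shows[(user_type, ad)] += 1
        views[(user_type, ad)] += viewed
    ads = {ad for _, ad, _ in log}
    def scorer(user_type):
        return max(ads, key=lambda ad: views[(user_type, ad)]
                                       / max(shows[(user_type, ad)], 1))
    return scorer

f = learn_scorer(log)
print(f("cyclist"))  # -> "bike_ad": best empirical view rate for cyclists
```

The learner just picks, for each user type, the ad with the best empirical view rate; the theoretical question is when this kind of empirical maximization provably generalizes.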

One of the successes of theoretical machine learning is the realization that you can characterize a learning function in terms of a single number called the VC dimension — roughly, the size of the largest set of examples that the functions under consideration can label in every possible way. The authors also cleverly use the fact that machine learning is equivalent to compression.

Think of it this way. If you could magically store all of the possible entries in the server log, you could just look up what previous users had done and base your decision (which ad to show) on that. But chances are that since many of the users who are cyclists liked bicycle ads, you don’t need to store all of the responses for users who are cyclists to guess accurately which ad to show someone who is a cyclist. Compression amounts to successively reducing the information you store (training data or features) for as long as your algorithm performs acceptably.
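Here is a minimal sketch of that compression idea, again with invented data (my illustration, not the paper’s actual scheme): collapsing the full log down to a handful of per-segment counts leaves the ad-picking decisions unchanged.

```python
from collections import Counter

# The full server log: one (user_type, ad, viewed) record per impression.
log = [("cyclist", "bike_ad", True)] * 900 + \
      [("cyclist", "car_ad", True)] * 100 + \
      [("driver", "car_ad", True)] * 800 + \
      [("driver", "bike_ad", True)] * 200

# "Compress" the log: keep only view counts per (user_type, ad) pair.
# Two thousand records shrink to four numbers...
counts = Counter((user_type, ad) for user_type, ad, viewed in log if viewed)

def pick_ad(user_type, counts):
    """Pick the ad with the most recorded views for this segment."""
    candidates = {ad: n for (ut, ad), n in counts.items() if ut == user_type}
    return max(candidates, key=candidates.get)

# ...and the decisions are identical to those from the full log.
print(pick_ad("cyclist", counts))  # -> "bike_ad"
print(pick_ad("driver", counts))   # -> "car_ad"
```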

The authors defined a compression scheme (the equivalent of a learning function) and were then able to link that scheme to incompleteness. They showed that the scheme works if and only if a particular undecidable hypothesis, the continuum hypothesis, is true. Since Gödel proved (well, actually developed the machinery to prove) that we can’t decide whether the continuum hypothesis is true or false, we can’t really say whether things can be learned using this method. That is, we may be able to learn an ad placer in practice, but we can’t use this particular machinery to prove that it will always find the best answer. Machine learning and A.I. are by definition intractable problems, where we mostly rely on simple algorithms to give results that are good enough — but having certainty is always good.

Although the authors caution that this is a restricted case and other formulations might lead to better results, there are two other significant consequences I can see. First, the compression scheme they develop has precisely the same structure that is used in Generative Adversarial Networks (GANs). GANs are commonly used to generate fake faces and appear in photo apps like Pikazo (http://www.pikazoapp.com/). The implication of this research is that we don’t have a good way to prove that a GAN will eventually learn something useful. The second implication is that there may be no provable way of guaranteeing that popular algorithms like Expectation Maximization will avoid optimization traps. The work continues.

It may be no coincidence that the Gödel Institute is in the same complex of buildings as the Vienna University AI institute.

Next door to the Gödel Institute is the Vienna AI institute

Avi Wigderson has a nice talk about the connection between Gödel’s theorems and computation. If we can’t even prove that a program will be bug free, then we shouldn’t be too surprised that we can’t prove that a program learns the right thing.

A nice talk by Avi Wigderson. Sometimes hacking is all you got.
Books, Data Science, Historically Black Colleges

Black data science book giveaway

The Atlanta University Center Consortium — the umbrella organization of Morehouse, Spelman, Clark Atlanta University, and Morehouse School of Medicine — just launched a Data Science Initiative. To celebrate, I am giving away two books!

Here’s an excerpt from the announcement:

“The AUCC Data Science Initiative brings together the collective talents and innovation of computer science professors from Morehouse College and other AUCC campuses into an academic program that will be the first of its kind for our students,” said David A. Thomas, president of Morehouse College. “Our campuses will soon produce hundreds of students annually who will be well-equipped to compete internationally for lucrative jobs in data science. This effort, thanks to UnitedHealth Group’s generous donation, is an example of the excellence that results when we come together as a community to address national issues such as the disparity among minorities working in STEM.”

Announcement of the Atlanta University Center data science initiative at http://d4bl.org/conference.html

To commemorate and honor the founding of this initiative, I’ve set up two book giveaways at Amazon. The first book is W. E. B. Du Bois’s Data Portraits: Visualizing Black America. W. E. B. Du Bois was a sociologist who taught at the Atlanta University Center. His visualizations of African American life in the early 20th century still set the standard for data visualization, and this book is a collection of the visualizations that he and his Atlanta University students produced for the 1900 Paris Exposition. If Atlanta University students were doing amazing data science 100 years ago without laptops, we can only guess what the future holds. Click this link to get your book.

The second book is Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life by Dr. Ruha Benjamin, a contemporary African American scholar at Princeton whose work addresses “the social dimensions of science, technology, and medicine”. Click this link to get a copy of Captivating Technology.

There is only one copy per book available so the first person to click gets the book.

If you want to know more about the work being done by Black data scientists, you should check out the DATA FOR BLACK LIVES III conference.

I’ll close with one of the sessions from the first Data for Black Lives conference. Where are the Black (data) scientists? Definitely at the Atlanta University Center!

History

This Fourth of July is yours, not mine

I quote from Frederick Douglass‘s speech of July 5, 1852. In that day, freed Africans in America feared being out on July 4, as lynchings usually spiked then. Those Black folk who cared to celebrate did so on the 5th.

The featured image is of a map of the indigenous peoples of the U.S. made by Aaron Carapella over at Tribal Nations Maps. I’ll gladly send you a $25 map if you submit a comment on this post or make a contribution to Aaron’s Go Fund Me. Offer is to the first poster 🙂

The original keepers of this land are still fighting to protect their identity, land, and existence. In the last few years, we have witnessed the removal of the basic voting protections accorded by the Voting Rights Act of 1965, a move that has arguably resulted in voter suppression and other actions to disenfranchise (again) African Americans and other marginalized groups. Most recently, the Supreme Court ruled that gerrymandering, even when it plainly dilutes the vote of marginalized communities, is acceptable and in fact beyond the purview of the courts. We are now witnessing a human rights crisis at the U.S. southern border, where internment camps for asylum seekers subject indigenous and Latinx men, women, and children to unlivable conditions — 24 persons have died in these facilities since the current administration took office.

All of these abuses and more call into question the vision of the U.S. that we are celebrating. There has always, it feels, been a tension between two visions. One is that of a republic that welcomes all and enables all to live the life they wish, to their potential — in that vision, the language, religion, color, gender identity, and physical ability of an individual are all strengths, part of the fabric that enables a unique society to flourish. The other vision is that of a melting pot, a country for and by white Christian men. These are extreme caricatures, but you need only contrast Martin Luther King’s “I Have a Dream” vision with the tweets of the current administration.

It is hard to make sense of these polarities, but thinking them through, wrestling with them through reading, discussion, and reflection, is essential to the existence of the country. As a start, I would challenge readers to take on the book Stamped from the Beginning by Ibram Kendi.

I sometimes wonder what staying in the British orbit would have meant for the people of color now living in the U.S. Would it just have meant another Canada? Canada, aside from being cold, isn’t so bad a place — a functional democracy. Britain ended slavery in 1833 with the Slavery Abolition Act. The U.K. has for several decades provided subsidized health care and education for all its citizens. On the other (bloody) hand, both Canada and Great Britain continue to grapple with the genocides of indigenous peoples. India, Ghana, Kenya, South Africa — the list of nations still dealing with the scars and trauma of racialized British imperialism spans the globe. Africa is still shedding the anti-LGBTQ legacy of British rule.

I think it’s a better mental effort to think through what living in governments and societies created in concert with the land’s original protectors could have looked like, and what it could still be. Here’s a zoomed-in view of the area that would have been the United States as of 1783 (the year the U.S. actually came into existence).

AI, Algorithms, Atlanta

The city of Atlanta doesn’t use facial recognition — so why does Delta Airlines?

I recently made an inquiry with the City of Atlanta Mayor’s office about its use of facial recognition software, and received the following reply on the Mayor’s behalf from the Atlanta Police Department:

The Atlanta Police Department does not currently use nor [have] the capability to perform facial recognition. As we do not have the capability nor [have] sought the use of [it], we [do] not have specific legislation design[ed] for or around facial recognition technology.

Delta Airlines, a company based in Atlanta, continues to promote the use of facial recognition software and, according to this Wired article, makes it difficult for citizens to opt out of its use.

There are several concerns with the use of facial recognition technology, succinctly laid out by the Electronic Frontier Foundation:

Face recognition is a method of identifying or verifying the identity of an individual using their face. Face recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops. 

But face recognition data can be prone to error, which can implicate people for crimes they haven’t committed. Facial recognition software is particularly bad at recognizing African Americans and other ethnic minorities, women, and young people, often misidentifying or failing to identify them, disparately impacting certain groups.

Additionally, face recognition has been used to target people engaging in protected speech

Electronic Frontier Foundation at https://www.eff.org/pages/face-recognition

So in other words, the technology has the potential for abuses of free assembly and privacy, and because the algorithms used are typically less accurate for people of color (POC), the potential for abuse is multiplied.

There are ongoing dialogues (here is the U.S. House discussion on the impact on civil liberties) about when/how/if to deploy this technology.

Do me a favor? If you happen to fly Delta, or are a member of their frequent flyer program, could you kindly ask for non-facial-recognition check-in? Asking for more transparency about the use and auditing of the software would also be an important step forward.