In early December, I attended the Black In AI workshop (BAI), part of the NeurIPS AI conference held in Vancouver.
Timnit Gebru and Rediet Abebe founded BAI three years ago to address the near-complete lack of Black and African voices at NeurIPS and other AI conferences.
Over that period, the organization has had a tremendous impact: participation has grown to several hundred attendees, it has spawned affiliated conferences like Deep Learning Indaba, it was instrumental in bringing the Eighth International Conference on Learning Representations to Ethiopia in 2020, and it has initiated a range of mentoring and training efforts across the African diaspora.
I spent a few hours this year participating as an organizer (some coordination of remote presenters and travel grants). The talks were streamed and recorded here.
I learned a lot by participating, and it was an honor to work with the brilliant people who made the conference happen. I wanted to share some of what I've been able to think through, in hopes that there might be some nuggets of value.
The interesting stuff happens at the margins
When I first started in AI, it was a field on the margin of computer science, and neural networks were on the margin of that margin. There is a lot of freedom and creativity that comes when one is open to just think and experiment, though there is also the pressure of proving the viability of your position. You can find real innovation being birthed if you look carefully. When you hear talks that put all your assumptions into question, you know that you've probably arrived at the right place.
What I found at Black In AI was a lot of work questioning the basic assumptions of a field that has moved from the margin to the spotlight (literally, half of the commercial booths at NeurIPS were hedge funds).
Three talks (among many) stood out for me in this respect.
Abeba Birhane: Rethinking the Ethical Foundations of AI
I had the privilege of hearing Abeba Birhane, who deservedly received the Best Paper award.
There is a lot of work on bias in machine learning models; for example, Assessing Social and Intersectional Biases in Contextualized Word Representations was presented a few days after Birhane's talk. Many of the "solutions" in the fairness literature focus on de-biasing the training and inference process. But Birhane's talk called the point of de-biasing algorithms into question, probing their intent. Is the point to present decision processes that are unfair as fair? Is the point really to reify structural oppression, to put lipstick on a pig (to borrow the title of one paper)? She is searching for the voices of the marginalized in artificial intelligence and machine learning.
To take a concrete example, many companies use the Textio app to rewrite job descriptions to have less gender bias. But maybe the bias being identified is really an indicator of internal structural patterns of oppression. How do you get companies to address the internal gender issues that give rise to these biased job descriptions in the first place?
Her talks are recorded. Relational Ethics starts at 20:30 into the presentation; her talk at ML for the Developing World: Challenges and Risks starts at 38:00. There is an accompanying blog post.
Matthew Kinney — Defending Black Twitter from Deepfakes
Matthew Kinney's talk, "Creative Red Teaming: Approaches to Addressing Bias and Misuse in Machine Learning," described an approach that uses deep learning to safeguard internet platforms from misinformation campaigns.
Kinney began looking at the Internet Research Agency's disinformation effort when it became apparent that Black Twitter was being targeted as part of voter suppression efforts. Since BAI, we've seen similar campaigns launched in support of India's Citizenship Amendment Act and other repressive efforts. These campaigns are likely to be a constant this year, making Kinney's work all the more critical.
Lest you think that disinformation campaigns are just about video manipulation, Kinney makes the point that misinformation based on text generators like OpenAI's GPT-2 can be more harmful.
Sara Menker: Data Science for Agriculture
One of the other impactful talks was by Sara Menker, CEO of Gro Intelligence, a company that does agricultural analytics. I was particularly interested in how the data science team manages rapid model development in response to quickly changing weather and farming conditions, and how they operate as a team split across Kenya and New York.
Sara Menker's talk starts at 1:48 into the video.
A number of oral presentations at BAI were on speech and language processing, particularly the development of technology to support Amharic, Tigre, Yoruba, and other African languages. I spoke with the founder of a small startup, Latan, that is working on Tigre translation. Healthcare and agriculture applications also featured prominently.
A number of presenters were not able to make it, mostly due to visa issues (details below). The diversity of their talks is indicative of the richness of the research community. Here's a recording of Simba Nyatsanga's talk on automatic video captioning.
You can access the Black In AI 2019 YouTube channel to view the others.
One of the many issues that Black In AI has tackled is transportation exclusion. Many researchers from Africa, South America, and the Caribbean lack either the institutional or personal resources that would enable a trip to Canada (or the other destinations where computing conferences are frequently held). A large part of BAI's fundraising effort goes toward bridging that gap: travel grants for presenters and other attendees cover airfare and lodging. This makes BAI one of the most economically inclusive workshops.
All that said, an ongoing challenge, down to the last minute, was getting presenters to the conference.
We had nearly 40 presenters denied visas outright. Most of these denials were reversed once senior IRCC officials reviewed the applications, but for many the reversal came too late, in some cases arriving the day the conference was to start. In large part, the denials and subsequent reversals seemed to hinge on a political calculus: senior officials only became involved after pressure from Wired and BBC articles, members of the House of Commons, and various high-profile AI researchers.
My analysis is that Canada wants to be perceived as an inclusive country with a progressive visa policy and is counting on AI as a growth industry, although those values may not be shared by individual consular staff, or perhaps even by the AI programs used for visa screening. This is not the case in the U.S., where policies stand in open opposition to fair visa access for people from Africa, Islamic countries, and other places outside Europe, the U.S., Canada, and Australia.
Even with the reversals, there were other unexpected visa conundrums. Several participants flying through South Africa had to be provided with alternate tickets because they lacked transit visas for Hong Kong. Several Nigerian presenters were price-gouged by Turkish Airlines when trying to board their flights; that is, they were presented with substantial additional visa fees at the gate. Complaints stemming from these policies resulted in last week's suspension of Turkish Airlines in Nigeria. Conference organizers had to scramble to find alternate flights home for those who had flown Turkish Airlines. I give these anecdotes only to highlight the immense privilege that those of us in the U.S., EU, and Canada enjoy in having relatively open and worry-free travel.
Planning Distributed Conferences
Pulling off the Black in AI workshop itself was the epitome of a distributed team in action. Dealing with visa rejections in Brazil and Nigeria, or just managing hotel payments and livestreams, highlighted the need for coordination and process. There is a lot of process knowledge that I feel is unique to making such a trans-national, inclusive (across language, gender identity, and diverging racial categorizations) event work. I wondered about the best ways to capture and curate that knowledge.
On Having Allies
I was encouraged to see individuals come together in sincere and supportive ways to bring about a wider view of what global collaboration could be. The coordinated effort by people in Women In AI and LatinX in AI was amazing. The tireless, round-the-clock efforts by those both famous and invisible, and the commitment to encouraging and supporting the emergence of new scholars, developers, artists, and thinkers, was uplifting in spite of so many other causes for concern. I don't doubt that there is an AI bubble, or that in a few years generative networks and transformers will be as pedestrian as rice cookers or smoke alarms, less AI than just another kind of device or program. What I do think is that getting people together from across the globe, really from across the globe, across the economic and gender and racial divides, is how important and unimagined change happens.