Tag: Machine Learning

Tags: AI, Conferences, inclusion

Notes from the Black In AI 2019 Workshop

In early December, I attended the Black In AI workshop (BAI), part of the NeurIPS AI conference held in Vancouver.

Timnit Gebru and Rediet Abebe founded BAI three years ago to address the near-complete lack of Black and African voices at NeurIPS and other AI conferences.

Over that period, the organization has had a tremendous impact: participation has grown to several hundred attendees, it has spawned affiliated conferences like Deep Learning Indaba, it was instrumental in bringing the Eighth International Conference on Learning Representations to Ethiopia in 2020, and it has initiated a range of mentoring and training efforts across the African diaspora.

I spent a few hours this year participating as an organizer (some coordination of remote presenters and travel grants). The talks were streamed and recorded here.

There is a lot that I learned by participating, and it was an honor to work with the brilliant people who made the conference happen. I wanted to share some of what I’ve been able to think through, in hopes that there might be some nuggets of value.

The interesting stuff happens at the margins

When I first started in AI, it was an area that existed on the margin of computer science. Neural networks were on the margin of that margin. There is a lot of freedom and creativity that comes when one is open to just think and experiment, though there is also the pressure of proving the viability of your position. You can find real innovation being birthed if you look carefully. When you hear talks that put all your assumptions into question, you know that you’ve probably arrived at the right place.

What I found at Black In AI was a lot of work questioning basic assumptions of a field that has moved from the margin to the spotlight (literally half of the commercial booths at NeurIPS were hedge funds).

There are three talks (among many) that stood out for me in this respect.

Abeba Birhane: Rethinking the Ethical Foundations of AI

I had the privilege of hearing Abeba Birhane, who was deservedly awarded Best Paper.

There is a lot of work on bias in machine learning models — for example, Assessing Social and Intersectional Biases in Contextualized Word Representations was presented a few days after Birhane’s talk. A lot of the “solutions” in the fairness literature focus on de-biasing the training and inference process. But Birhane’s talk called into question the point of de-biasing algorithms, probing their intent. Is the point to present decision processes that are unfair as fair? Is the point really to reify structural oppression — to put lipstick on a pig (to borrow the title of one paper)? She is searching for the voices of the marginalized in artificial intelligence and machine learning.

To take a concrete example, many companies are using the Textio app to rewrite job descriptions to have less gender bias. But maybe identifying the bias is really an indicator of internal structural patterns of oppression? How do you get companies to address the internal gender issues that give rise to these biased job descriptions in the first place?


Her talks are recorded: Relational Ethics starts at 20:30 into the presentation, and her talk at ML for the Developing World: Challenges and Risks starts at 38:00. There is an accompanying blog post.

Matthew Kinney — Defending Black Twitter from Deepfakes

There was Matthew Kinney’s talk “Creative Red Teaming: Approaches to Addressing Bias and Misuse in Machine Learning” — an approach using deep learning to safeguard internet platforms from misinformation campaigns.

Kinney began looking at the Internet Research Agency‘s disinformation effort when it became apparent that Black Twitter was being targeted as part of voter suppression efforts. Since BAI, we’ve seen similar campaigns launched in support of India’s Citizenship Amendment Act and other repressive efforts — these campaigns are likely to be a constant this year, making Kinney’s work all the more critical.

Lest you think that the disinformation campaigns are just about the use of video manipulation, Kinney makes the point that misinformation based on text generators like OpenAI’s GPT-2 can be even more harmful.


Sara Menker: Data Science for Agriculture

One of the other impactful talks was by Sara Menker, CEO of Gro Intelligence, a company that does agricultural analytics. I was particularly interested in how the data science team manages rapid model development in response to quickly changing weather and farming conditions, and how they handle a team split across Kenya and New York.

Sara Menker’s talk starts at 1:48 into the video.

Prominent themes

A number of oral presentations at BAI were around speech and language processing — particularly the development of technology to support Amharic, Tigre, Yoruba, and other African languages. I spoke with the founder of a small startup, Latan, that is working on Tigre translation. Healthcare and agriculture applications also featured prominently.


Remote Presentations

A number of presenters were not able to make it, mostly due to visa issues (details below). The diversity of their talks is indicative of the richness of the research community. Here’s a recording of Simba Nyatsanga’s talk on automatic video captioning.

You can visit the Black In AI 2019 YouTube channel to view the others.

Visa Privilege

One of the many issues that Black In AI has tackled is transportation exclusion. Many researchers from Africa, South America, and the Caribbean lack either the institutional or personal resources that would enable a trip to Canada (or other destinations where computing conferences are frequently held). A large part of BAI’s fundraising effort is about putting together the resources to bridge that gap — travel grants for presenters and other attendees also provide airfare and lodging. This makes BAI one of the most economically inclusive workshops.

All that said, an ongoing challenge down to the last minute was getting presenters to the conference.

We had nearly 40 presenters denied visas outright. Most of these denials were reversed once senior IRCC officials reviewed the applications, but for many it came too late, in some cases on the day the conference was to start. In large part, the denials and subsequent reversals seemed to hinge on a political calculus: senior officials only became involved after pressure from Wired and BBC articles, members of the House of Commons, and various high-profile AI researchers.

My analysis is that Canada wants to be perceived as an inclusive country with a progressive visa policy and is planning on building AI as a growth industry, although these values may not be shared by individual consular staff, or perhaps even by the AI programs used for visa screening. This isn’t the case in the U.S., where policies are in open opposition to fair visa access for persons from Africa, Islamic countries, and other locations outside of Europe, the U.S., Canada, and Australia.

Despite the reversals, there were other unexpected visa conundrums. Several participants flying through South Africa had to be provided with alternate tickets because they lacked transit visas for Hong Kong. Several Nigerian presenters were price-gouged by Turkish Airlines when trying to board their flights — that is, they were presented with substantial additional visa fees at the gate. The complaints stemming from these policies resulted in last week’s suspension of Turkish Airlines flights in Nigeria. Conference organizers had to scramble to find alternate flights home for those who flew Turkish Airlines. I give these anecdotes only to highlight the immense privileges that those of us in the U.S., EU, and Canada enjoy in having relatively open and worry-free travel.

Planning Distributed Conferences

Pulling off the Black in AI workshop itself was the epitome of a distributed team in action. Dealing with visa rejections in Brazil and Nigeria, or just managing hotel payments and livestreams, highlighted the need for coordination and process. There is a lot of process knowledge that I feel is unique to making such a trans-national, inclusive (across language, gender identity, and diverging racial categorizations) event work. I wondered about the best ways to capture and curate that knowledge.

On Having Allies

I was encouraged to see individuals come together in sincere and supportive ways to bring about a wider view of what global collaboration could be. The coordinated effort by people in Women In AI and LatinX in AI was amazing. The tireless, round-the-clock efforts by those both famous and invisible, and the commitment to encouraging and supporting the emergence of new scholars, developers, artists, and thinkers, was uplifting in spite of so many other causes for concern. I don’t doubt that there is an AI bubble, or that in a few years generative networks and transformers will be as pedestrian as rice cookers or smoke alarms — less AI than just another kind of device or program. What I think is that getting people together from across the globe, really from across the globe — from across the economic and gender and racial divides — is how important and unimagined change happens.

Tags: AI, Data Science, inclusion

Black In AI workshop call for papers

If you are a student, researcher, or professor at a Historically Black College or University and work actively in data science, machine learning, or artificial intelligence, please consider submitting a paper to the 2019 Black in AI workshop. The deadline is now August 7 — I’d encourage submission even (especially!!) if your research and ideas are still coming together. There are also travel grants available and I’ll post that application soon.

The workshop occurs during the 2019 NeurIPS conference (probably the most attended conference on deep learning and other AI architectures). The specific goal of the workshop is to encourage the involvement of people from Africa and the African diaspora in the AI field, and to promote research that benefits (and does no harm to) the global Black community.

Paper submission extended deadline: Wed August 7, 2019 11:00 PM UTC

Submit at: https://cmt3.research.microsoft.com/BLACKINAI2019

The site will start accepting submissions on July 7th.

No extensions will be offered for submissions.

We invite submissions for the Third Black in AI Workshop (co-located with NeurIPS). We welcome research work in artificial intelligence, computational neuroscience, and their applications. These include, but are not limited to, deep learning, knowledge reasoning, machine learning, multi-agent systems, statistical reasoning, theory, computer vision, natural language processing, and robotics, as well as applications of AI to other domains such as health and education, and submissions concerning fairness, ethics, and transparency in AI.

Papers may introduce new theory, methodology, applications or product demonstrations. 

We also welcome position papers that synthesize existing work, identify future directions, or inform on neglected/abandoned areas where AI could be impactful. Examples are work on AI & Arts, AI & Policy, etc.

Submissions will fall into one of these four tracks:

  1. Machine learning algorithms
  2. Applications of AI 
  3. Position papers
  4. Product demonstrations

Work may be previously published, completed, or ongoing. The workshop will not publish proceedings. We encourage all Black researchers in areas related to AI to submit their work; they need not be the first author.

Formatting instructions

All submissions must be in PDF format. Submissions are limited to two content pages, including all figures and tables. An additional page containing only references is allowed. Submissions should be in a single column, typeset using 11-point or larger fonts, and have at least a 1-inch margin all around. Submissions that do not follow these guidelines risk being rejected without consideration of their merits.

Double-blinded reviews

Submissions will be peer-reviewed by at least two reviewers, in addition to an area chair. The reviewing process will be double-blinded at the level of the reviewers. As an author, you are responsible for anonymizing your submission. In particular, you should not include author names, author affiliations, or acknowledgements in your submission, and you should avoid providing any other identifying information.

Travel grants

Use this link to apply for travel grants to the conference. They are available for eligible attendees, and applications should be submitted by Wed July 31, 2019 11:00 PM UTC at the latest (note that this is one day after the original paper submission deadline).

Content guidelines

Submissions must state the research problem, motivation, and contribution. Submissions must be self-contained and include all figures, tables, and references. 

Here is a set of good sample papers from 2017: sample papers

Questions? Contact us at bai2019@blackinai.org.