Category: inclusion

Tags: AI, Algorithms, inclusion, Machine Learning, Social Justice

AI and the War on Poverty

“A.I. and Big Data Could Power a New War on Poverty” is the title of an op-ed in today’s New York Times by Elisabeth Mason. I fear that AI and Big Data are more likely to fuel a new War on the Poor unless a radical rethinking occurs. In fact, this algorithmic War on the Poor seems to have been going on for quite some time, and the Poor are not winning.

Mason posits that AI and Big Data provide three paths forward from the trap of inequality: 1) the ability to match people to available jobs; 2) the ability to deliver customized training that prepares people for those jobs; and 3) the ability to deliver social welfare programs more efficiently through algorithms.

The first objective seems within the realm of Indeed.com and LinkedIn’s recommendation algorithms, and the second, personalized training, has a long history in AI systems development. The problem is access: how do you get one of the “good middle-class jobs” in San Francisco when you live in Atlanta and attend a high school that lacks the coursework to prepare you for Stanford? How do you get access to an immersive 3D training environment when your family can’t afford to put down $100 a month for high-speed internet and your school lacks the equipment as well?

The third part of Mason’s strategy is the most problematic. We’ve seen AI (meaning machine learning and decision-making algorithms) used to enforce biased sentencing practices, seen skewed training data lead to racial bias in facial recognition, and seen data-driven methods documented in predatory lending. These examples are the tip of a deep and still largely unaddressed problem in AI. In short, if the algorithms on which our hopes for transformation are pinned learn from data that reifies the structural racism at the root of social inequity, then we’re simply finding a more optimal route to oppression.
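To make that last point concrete, here is a minimal sketch, in Python with scikit-learn, of how a model trained on historically skewed labels can reproduce bias even when no explicit group variable is given to it. Every feature, number, and variable name here is hypothetical, chosen only to illustrate the mechanism of proxy discrimination.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of identical underlying ability; group 1 is historically disadvantaged.
group = rng.integers(0, 2, size=n)
ability = rng.normal(0, 1, size=n)

# A proxy feature (think zip code or school attended) that correlates with group.
proxy = group + rng.normal(0, 0.5, size=n)

# The historical labels the model learns from were skewed against group 1.
label = (ability - 0.8 * group + rng.normal(0, 0.5, size=n) > 0).astype(int)

# Train on "neutral" features only: no explicit group variable.
X = np.column_stack([ability, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The proxy lets the model reconstruct the historical bias: positive-prediction
# rates diverge by group even though ability is identically distributed.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")

In this toy setup, dropping the group column changes nothing, because the proxy carries the signal; that is why auditing outcomes, not just inputs, matters.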

Before we hand over the lives and futures of the most vulnerable members of society to algorithms that we are still trying to fathom, we should strive first for accountability and transparency in those algorithms. The effort underway in New York City to ensure ethical accountability for algorithms is one start.

But if machine learning and AI are the new tools of our age, we should empower all people to put the computational tools and conceptual frameworks of data science to work for them. Black Lives Matter activists took up social networking tools to organize protests and share video, and that work has changed the conversation and empowered communities. What could a coming generation do with additional visualization and analytical tools?

It was the prospect of using AI to empower education that first attracted me to the field. I think the emerging technology has some good to do, but the process must necessarily be participatory. When artists, educators, poets, activists, grocery store owners, gardeners — everyone — can be given access to the tools, then I’ll bet on the human capacity to find new paths to expression and opportunity.

Tags: AI, inclusion

Black in A.I.

The Black in AI workshop at this year’s NIPS conference could be among the year’s most important events in artificial intelligence.

Why? If you think the AI in your phone, car, or bathroom is free of racist or sexist biases, then respectfully give this op-ed by mathematician Cathy O’Neil a close read and begin educating yourself.

Further, the societal threat posed by systems capable of intentionally exploiting racial fears and prejudice should be evident by now.

Developing AI that is free of the gender, racial, and other prejudices that continue to mar our society is an immense task, and, as many have pointed out, there is no single algorithm or tool that will get us there. One part of the solution is opening the field to scientists, developers, and thinkers from all backgrounds so that norms of oppression and exclusion are questioned and ultimately ushered into the museum.

In that sense, the Black in AI workshop is an important contribution. The stated goal of the workshop is to provide a forum to nurture and develop researchers who are Black, thus promoting inclusivity in a field that is shamefully homogenous.

Does this mean that the inclusion of Black people in AI will spell the end of racist AI? Probably not, but clearly the near exclusion of Black folk has given biased systems a free pass. Perhaps a Black AI researcher might be more inclined to raise concerns about algorithmic bias in facial training sets, or even to design commercial facial recognition systems with ethnic diversity baked in. But more importantly, a Black blogger, professor, lecturer, or developer might foster important shifts in the way their readers, students, and customers view the world. That has to be a step forward for humanity.