San Francisco recently passed an ordinance controlling the use of facial recognition in the city. The ordinance passed in large part thanks to the pioneering research of Joy Buolamwini.

The argument against the technology is twofold. First, it is highly invasive in public spaces and may pose a direct threat to the basic (US) constitutional right of freedom of assembly. Second, the feature extraction and training-set construction methodologies (for newer deep-learning-based models) have been shown to have racial and gender biases "baked in". For example, the systems analyzed in Buolamwini's work are less accurate for Black people and for women, either because the data sets used for training consist mostly of white male faces, or because the image-processing algorithms focus on image components and make assumptions more typical of European faces.
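The core of an audit like Buolamwini's is disaggregation: rather than reporting a single overall accuracy number, accuracy is computed separately for each demographic subgroup so disparities become visible. A minimal sketch of that idea, using entirely hypothetical data (the subgroup labels and numbers below are illustrative, not real benchmark results):

```python
# Sketch of a disaggregated accuracy audit: compute accuracy per
# demographic subgroup instead of one aggregate number.
# All data here is hypothetical, for illustration only.
from collections import defaultdict

def accuracy_by_group(results):
    """results: list of (subgroup, prediction_was_correct) pairs.
    Returns a dict mapping each subgroup to its accuracy."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        if ok:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit records -- NOT real benchmark figures.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", True), ("darker-skinned female", False),
]
print(accuracy_by_group(results))
# A gap between subgroup accuracies is the signal an audit looks for.
```

A system that reports 75% accuracy overall on this data would mask the fact that one subgroup sees 100% accuracy and another only 50%; that masking is exactly what disaggregated evaluation exposes.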

Consider uses in policing, where an inaccurate system misidentifies a Black or Latinx person as a felon. Especially when there is no transparency into the use or internals of such systems, the chances for abuse and injustice are incredible. Despite these concerns, Amazon shareholders think it is acceptable to release the technology on the public.

Do you know if such a system is deployed in your city? If so, are there measures to control its use, or make audits available to your community? If not, have you considered contacting your elected representatives to support or discuss appropriate safeguards?

Posted by charlescearl
