Some researchers from African countries were barred from attending last week’s A.I. conference in Montréal due to visa issues.
The story is covered in this Wired article.
The organizers of the Black in A.I. workshop were working around the clock with immigration experts, but at least 40 presenters were not able to attend.
Even Justin Trudeau has promised an investigation, though only after the fact.
This story again puts front and center how technologies with so much potential are being circumscribed by a diminishing commitment to open and fair borders.
A major A.I. conference will be hosted in Addis Ababa in 2020. Will that be a step toward opening the borders of research and development in A.I.?
I'm feeling some despair heading back to Georgia, and to the U.S. generally, after a month in India.
It’s about this question: Will Brian Kemp govern Georgia like Lester Maddox?
Lester Maddox became the governor of Georgia during my childhood. He was openly racist, was famous for selling axe handles with which to beat down civil rights activists, and actively fought against a state memorial immediately after Dr. Martin Luther King Jr. was assassinated.
Until Brian Kemp, I don't remember any Georgia governor who so actively and openly embraced race-based voter suppression and racist immigration messaging.
Yet Maddox apparently went on to conduct the most aggressive hiring of African Americans in the state's history.
I’m finding solace knowing that people of color in Georgia nonviolently sacrificed their livelihoods and their lives to end the Maddox mode of governance.
Those sacrifices opened the door for people like Stacey Abrams, Andrew Young, and Jimmy Carter.
Our grandmothers and uncles and neighbors did this with an inner soul power (to quote Dr. King) that could not be suppressed by axe handles or tear gas. I find strength knowing that we can call on that soul power to do the same again.
Wired recently posted a piece on several of the ethics-related workshops and talks at this year's NIPS conference. The article includes coverage of the Black in A.I. and Women in Machine Learning workshops, as well as talks like Victoria Krakovna's on interpretability for A.I. safety.