Bias in facial recognition isn’t hard to spot, but it’s hard to get rid of


Joy Buolamwini is a researcher at the MIT Media Lab who pioneered research into the biases embedded in artificial intelligence and facial recognition. The way she got to this job is almost too on the nose: as a graduate student at MIT, she created a mirror that would project aspirational images onto her face, like a lion or tennis star Serena Williams.

But the facial recognition software she installed wouldn't work on her Black face until she literally put on a white mask. Buolamwini is featured in a documentary titled "Coded Bias," which airs tonight on PBS. She told me about a scene in which facial recognition technology was installed in an apartment complex in the Brownsville neighborhood of Brooklyn, New York. The following is an edited transcript of our conversation.

Joy Buolamwini (Photo courtesy of Buolamwini)

Joy Buolamwini: In fact, a tenant association reached out to us and said, "Look, there's this landlord, and he's setting up a system that uses facial recognition as an entry mechanism. Tenants don't want it. Can you support us? Can you help us understand the technology a bit, and also its limitations?" And what I found is that the tenants already had [the data we wanted]. And it wasn't just a question of how well these technologies performed, though it did seem that the group the systems struggled with most was the predominant group in that building. It was also a matter of agency and control, of even having a voice and a choice to say, "Is this a system that we want?"

Molly Wood: You know, I get the feeling that there is a kind of double whammy with this technology: there is a built-in bias, and because it's fundamentally suited to surveillance and punishment, it seems to be used almost disproportionately in the communities where it's least likely to be effective and most likely to cause problems, frankly.

Buolamwini: Absolutely. And there we see that when it doesn't work, technically speaking, we get misidentifications, false arrests and so on. But even when it works, these systems can still be optimized as tools of oppression. So putting surveillance tools in the hands of police, where we repeatedly see the over-criminalization of communities of color, is not going to improve the situation; it simply automates what is already happening.

Wood: Now there is this industry of algorithmic auditors whose job is to tell companies whether there is bias in their work. Can that be a solution, or are these issues too fundamental?

Buolamwini: I think algorithmic auditing absolutely has a role to play in the ecosystem when it comes to accountability and to understanding the capabilities and limitations of AI systems. But what I often see is that the auditing is done without context. If you're just auditing an algorithm in isolation, or a product that uses machine learning in isolation, you don't necessarily understand how it will affect people in the real world. And so it's a bit of a Catch-22: "Well, if we don't know how this is going to work on people in the real world, should we deploy it?" That's why you want to have processes like algorithmic impact assessments, because it's about looking at the whole design: Is this even a technology that we want?

If there are benefits, what are the risks? And most importantly, do we include the "excoded," the people most likely to be harmed if the deployed systems malfunction? I think that's an important place to include people. And beyond the algorithmic audit, you really have to think about redress. You can audit the systems, you can do your best to minimize bias and minimize harm. But we also have to keep in mind that systems are fallible; there will be mistakes in the real world. So what happens when someone is harmed? That's part of the ongoing work we're doing with the Algorithmic Justice League: examining what harm redress looks like in an AI-powered world. Where do you go when a system hurts you? We want that place to be the Algorithmic Justice League.

Wood: Tell me more about that phrase, the "excoded." I've never heard it before.

Buolamwini: "Excoded" is a term I coined as I saw the people who suffer the most at the hands of algorithms of oppression, exploitation or discrimination. It's a way of describing those who are already marginalized in society. No one is immune, but those who are already the most marginalized bear the brunt of these systems' failures.

Wood: We are at a moment when companies that are attempting to improve the ethics of AI, or at least carry the burden of improving the ethics of AI, are in some cases laying off the very people of color they hired to work on these issues. Are we going backwards?

Buolamwini: What we are finding is that change cannot come only from within, because the work you do will fundamentally challenge the power of these companies, and it will also fundamentally challenge their results. If you find that there are adverse effects or harmful biases in the systems they create, or even climate impacts, that forces companies to consider those externalities. And so it may be easier to get rid of the researchers who surface the very issues you hired them to find in the first place than to tackle those issues head on.

And in some ways, it seems companies want it both ways. They want to be able to say, "We have a team. We are looking at AI ethics issues, and we are concerned." But when it comes to really looking at questions of justice, of who has power, of actually redistributing that power and having the ability to say no to harmful products or harmful uses of AI even if it means less profit, companies are not incentivized to do that by design, and we shouldn't expect them to. So when I see the firing of Dr. Timnit Gebru, for example, or subsequently Meg Mitchell, a pioneer in examining the harms of algorithmic systems, it's a major red flag that we cannot count on change from the inside alone. We need laws, we need regulations, we need outside pressure; that is what businesses respond to. Change will not come only from within, because the incentives are not aligned.

Related Links: More information from Molly Wood

Here's a PBS description of the documentary, which includes a summary of reviews, many of which note that the documentary actually manages to find hope in such a difficult and infuriating subject. And thanks to the work of Buolamwini and other researchers and mathematicians, the problem of AI bias is now more widely recognized, and some companies and researchers are trying to find proactive ways to root it out.

There is a good story on ZDNet about Deborah Raji, a researcher at the Mozilla Foundation who studies algorithmic harm and works with Buolamwini's organization, the Algorithmic Justice League. One thing Raji has explored is whether companies could use bug bounties, a system in which companies pay ethical hackers to find security holes, to entice data scientists to try to spot instances of bias. Of course, as the article notes, the biggest obstacle to such a solution is that there is not yet an accepted standard defining algorithmic harm, let alone an agreed-on method for detecting bias. Which tells you how new the study of this bias really is, and how dangerous it is for AI technology to spread further around the world every day: into smart cameras and speakers, résumé scanners, search engine results, map directions, bank loan applications, medical decision-making and policing. So more, please, and faster. White masks are not the answer.

