RADIO AI - A Public Resource for AI Literacy (for Everyone)

RADIOAI Ep.3.3: MIT's Algorithmic Justice - The People VS. Technology (Face Recognition)

May 06, 2021 · Dr. Cindy Mason, Dr. Henry Lieberman · Season 3 Episode 3

RADIO AI Episode 3.3 discusses the problem of algorithmic justice with Henry Lieberman, an AI and machine learning expert at MIT. Machine learning algorithms and models can be found across many apps. They have become institutionalized, making life-impacting decisions such as hiring, firing, and prison sentences. Staggering errors in this technology mostly affect women and minorities. Joy Buolamwini at MIT found this out the hard way when she tried to build a magical mirror for a class project: the face recognition in the mirror would only work if she wore a white face mask. Wow. Investigating further, she discovered how widespread and devastating the problem was, so she created the Algorithmic Justice League. Based on the League's work, many cities have now revised or removed their face recognition systems until the error rates improve. Algorithmic justice means that the data used to train machine learning, whether for predicting our words or recognizing our faces and voices, represents all of us, or that the algorithms themselves take that reality into account. What if, before an algorithm or technology is unleashed on the public, it were vetted to be sure it does not create harm and that it actually makes our lives better, not more annoying? Right now, companies have a predatory relationship with people. When Henry says corporations benefit when they have cooperative relationships with customers, he hits the nail on the head.
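The kind of audit the Algorithmic Justice League performs boils down to a simple idea: measure a model's error rate separately for each demographic subgroup instead of reporting one overall number, because a single aggregate accuracy can hide large disparities. Here is a minimal sketch of such a disaggregated audit; the group names, labels, and predictions below are invented toy data for illustration, not the actual audit results.

```python
from collections import defaultdict

def audit_by_group(records):
    """Compute per-group error rates for a classifier's predictions.

    Each record is a (group, true_label, predicted_label) tuple.
    Returns a dict mapping group -> error rate in [0, 1].
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy predictions from a face-analysis model.
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassified
    ("darker-skinned women", "female", "female"),
]
rates = audit_by_group(records)
# Overall accuracy here is 75%, which hides that one group's
# error rate is 0% while another's is 50%.
```

Reporting `rates` per group, rather than one pooled number, is what makes the disparity visible in the first place.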
