The White House Asks Communities To Lend A Hand In Evaluating The Equity Concerns Of Biometric Technologies

KEY INSIGHTS

  • Facial analysis algorithms misclassified Black women almost 35 percent of the time.
  • Seventeen of the highest-performing verification algorithms had similar levels of accuracy at identifying Black women and white men.
  • The White House Office of Science and Technology Policy (OSTP) is accepting responses to a request for information (RFI) on current or planned uses of artificial intelligence (AI)-enabled biometric technologies in the private and public sectors through January 2022.

The White House Office of Science and Technology Policy (OSTP) will accept responses until January 15, 2022, to its request for information (RFI), a formal way for the government to gather input on current or planned uses of AI-enabled biometric technologies in the private and public sectors. Facial recognition technology is increasingly relied upon for everything from unlocking a phone to federal surveillance. However, the technology raises concerns about privacy and equity, especially for Black Americans.

According to the 2018 Gender Shades research, some facial analysis algorithms misclassified Black women almost 35 percent of the time. Computer scientists Joy Buolamwini and Timnit Gebru evaluated three commercial gender classification systems and showed that darker-skinned women are the most misclassified group, with error rates of up to 34.7 percent. The report also cited a similar study by Buolamwini and computer scientist Deborah Raji at the Massachusetts Institute of Technology, which found that Amazon’s software was discriminatory.

Reports show that facial recognition error rates differ between algorithms developed in the United States and those developed abroad.

A 2019 National Institute of Standards and Technology (NIST) report found higher rates of false positives for Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100, depending on the individual algorithm. The report notes that false positives, in which photos of two different individuals are wrongly identified as a match, can present a security concern to the system owner because they may allow impostors access.

Among U.S.-developed algorithms, there were similarly high false positives in one-to-one matching for Asian, African American and Native faces (including Native American, American Indian, Alaskan Indian and Pacific Islander groups).

NIST’s vendor tests also revealed that the most accurate algorithms, which should be the only algorithms used in government systems, did not display significant demographic bias. For example, 17 of the highest-performing verification algorithms had similar levels of accuracy for Black women and white men: false-negative rates of 0.49 percent or less for Black women (equivalent to an error rate of less than 1 in 200) and 0.85 percent or less for white men (equivalent to an error rate of less than 1.7 in 200).

Several companies and government agencies are following suit, adopting facial recognition accuracy practices and working to build equity into the software.

“The growing use of autonomy and automated decision-making technologies impacts core civil rights principles regarding privacy and equity,” Dominique Harrison, Director of Technology Policy at the Joint Center for Political and Economic Studies, said during a panel on AI. “These principles underscore the importance of ensuring fairness in automated decisions, enhancing individual control over personal information and protecting people from inaccurate data.”

American facial recognition company Clearview AI’s app was a law enforcement tool for years before an impartial third party tested its accuracy. Last month, in a one-to-one test, Clearview performed well at matching two different images of the same person, simulating the facial verification people use to unlock their smartphones.

However, after two rounds of federal testing last month, the tool’s accuracy is no longer a prime concern. Clearview ranked among the top 10 of nearly 100 facial recognition vendors in a federal test intended to determine which tools are best at finding the right face while searching through photos of millions of people.

Ban the Scan is an international campaign to ban facial recognition surveillance in order to uphold human rights. Since 2017, cities and states such as San Francisco, Boston and Oregon have banned police use of facial recognition. Last year, New York Police Department (NYPD) officers used facial recognition cameras in attempts to arrest activists in heavily Black-populated areas of New York.

Government agencies do not plan to stop incorporating facial recognition technology. However, state-level advocacy and government analysis of AI could positively change equity in biometric advancements.

Sponsored Series: This reporting is made possible by The Ewing Marion Kauffman Foundation

The Ewing Marion Kauffman Foundation is a private, nonpartisan foundation based in Kansas City, Mo., that seeks to build inclusive prosperity through a prepared workforce and entrepreneur-focused economic development. The Foundation uses its $3 billion in assets to change conditions, address root causes, and break down systemic barriers so that all people – regardless of race, gender, or geography – have the opportunity to achieve economic stability, mobility, and prosperity. For more information, visit www.kauffman.org and connect with us at www.twitter.com/kauffmanfdn and www.facebook.com/kauffmanfdn.

Cheri Pruitt-Bonner

Cheri is an Atlanta-based journalist who strives to serve the community by bringing nuance to each story. She has previously written about politics and government affairs for Georgia State University's student media.