Advocacy Organizations Push To Reshape AI Technology To Benefit People of Color

A group of civil rights activists and other tech advocacy organizations has offered the National Institute of Standards and Technology (NIST) suggestions on how to make AI less discriminatory. The recommendations were directed at NIST's draft publication, A Proposal for Identifying and Managing Bias in Artificial Intelligence (SP 1270), which outlines approaches for identifying and managing AI bias.

NIST is a physical sciences laboratory and an agency of the United States Department of Commerce. The proposal was released for public comment on June 22, and several organizations recently responded in a letter with recommendations for equitable uses of AI.

“The recommendations came collectively from the organizations and they are based on experience in the field and what we have seen in data, specifically Home Mortgage Disclosure Act (HMDA) data, National Survey of Mortgage Originations (NSMO) data and Census data,” Michael Akinwumi, Chief Tech Equity Officer of the National Fair Housing Alliance, told The Plug.

The proposed changes include action plans for equity standards in consumer protection and more diverse staffing to oversee AI applications at NIST and other federal agencies. The groups also suggest civil rights and AI training to build regular anti-discrimination practices into AI development.

The letter also advises sharing publicly available collections of AI data, methods and feedback. According to the advocates, NIST should routinely analyze the efficacy of specific uses of AI and their effects on communities of color and other minority groups.

Machine learning systems routinely fail to accurately recognize people of color and misclassify gender. Some of the most egregious AI mistakes have come from Amazon, Microsoft and IBM. In an ACLU test, Amazon’s surveillance tool, Rekognition, incorrectly matched 28 members of Congress to mugshots in a criminal database. Most of the incorrect matches were people of color; six were members of the Congressional Black Caucus, among them civil rights legend John Lewis.

A 2019 study showed that in facial recognition, error rates for darker-skinned Black women ran above 35 percent, compared with less than one percent for lighter-skinned men. AI systems from major companies have misclassified some of the most recognizable Black women, including Oprah Winfrey and Michelle Obama.

Without proper care, AI could extend historic discrimination. For much of recent history, Black communities have been excluded from economic advances, partly because of restrictive government policy decisions.

The Home Owners’ Loan Act of 1933, a New Deal measure, created the federal Home Owners’ Loan Corporation (HOLC), whose mapping system made race a primary factor in determining neighborhood quality. The maps were drawn from haphazardly chosen data, and prejudiced real estate professionals enforced the inequitable housing standards.

The Federal Housing Administration would later use the maps to structure mortgage insurance underwriting decisions. Those race-based decisions not only reflected the prevailing views on housing but systemically amplified biased practices across the nation. As a result, Black Americans continue to struggle in the housing market, with far lower rates of homeownership.

The U.S. could face the same problems of systemic bias and restrictive policy if AI goes unchecked. The letter urges NIST to act quickly to determine whether AI in the financial market benefits consumers or builds on preexisting discriminatory patterns. AI is quickly taking over consumer financial services and could have a worse impact on borrowers of color than the HOLC ever did.

“We understand that AI systems can be technically accurate and still perpetuate harms and discrimination, which is why our ongoing efforts to develop community-driven methods of identifying and managing risk specific to bias are so critical,” Reva Schwartz, principal investigator for AI bias, and Elham Tabassi, chief of staff of NIST’s Information Technology Laboratory, told The Plug in a joint statement.

In spring 2022, NIST plans to release detailed technical guidance drawing on the proposed enhancements and on lessons from several workshops. Meanwhile, debates over AI use cases continue.

In August, Stanford University researchers announced the Center for Research on Foundation Models (CRFM), along with a co-authored research paper describing “foundation models,” an emerging class of AI systems that includes large language models such as OpenAI’s Generative Pre-trained Transformer 3 (GPT-3). Immediate backlash ensued, as many critics objected to the framing and to the limited capabilities and behavior of these models.

CRFM actively invites organizations and schools to work together on studying the impacts of models like GPT-3, which is trained on broad, general-purpose language data. Academics tout such software as an advantage for understanding the world better, but advocates warn that it inherits errors that pose risks to minority communities and their data.

Federal agencies and the private sector alike welcome debate over current and long-standing issues in AI. More attention to local companies, as well as to Big Tech’s advantages, can provide better guidance for companies as the technology progresses.

Cheri Pruitt-Bonner

Cheri is an Atlanta-based journalist who strives to serve the community by bringing nuance to each story. She has previously written about politics and government affairs for Georgia State University's student media.