Techno-Racism: People of Color’s New Enemy

As protesters take to the streets to fight for racial equality in the United States, experts in digital technology are quietly tackling a lesser-known but related injustice.

It’s called techno-racism. And while you may not have heard of it, it’s baked into some of the technology we encounter every day.

Digital technologies used by government agencies and private companies can unwittingly discriminate against people of color, making techno-racism a new and crucial part of the battle for civil rights, experts say.

“It’s not just the physical streets. Black folks now have to fight the civil rights fight on the virtual streets, in those algorithmic streets, in those internet streets,” says W. Kamau Bell, host of the CNN original series “United Shades of America.” {snip}

{snip}

Techno-racism describes a phenomenon in which the racism experienced by people of color is encoded in the technical systems used in our everyday lives, says Mutale Nkonde, founder of AI For People, a nonprofit that educates Black communities about artificial intelligence and social justice.

{snip}

It gained new traction last year as the title of a webinar with Tendayi Achiume, a UN special rapporteur on racism, based on a report she wrote. Achiume and other experts argue that digital technologies can implicitly or explicitly exacerbate existing biases about race, ethnicity and national origin.

{snip}

Or in other words, as Bell says in Sunday’s “United Shades” episode:

“Feed a bunch of racist data, collected from a long racist history … and what you get is a racist system that treats the racism that’s put into it as the truth.”

{snip}

Facial recognition technology uses software to identify people by matching images, such as faces in a surveillance video with mug shots in a database. It’s a major resource for police departments searching for suspects.
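
As a rough illustration of how that matching works (not any vendor's actual system), the sketch below compares a "probe" face's numerical feature vector against a database of stored vectors and returns the closest record above a similarity cutoff. The database, embedding size and threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mug-shot database: each record maps to a 128-number
# "embedding" that a real system would produce with a face-encoder model.
database = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding of the face from the surveillance frame

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1-to-many search: score the probe against every stored embedding.
scores = {name: cosine_similarity(probe, emb) for name, emb in database.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])

THRESHOLD = 0.6  # illustrative cutoff; real systems tune this trade-off
print(best_name if best_score >= THRESHOLD else "no confident match")
```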

But research has shown that some facial analysis algorithms misidentify Black people at higher rates, an issue explored in the Netflix documentary "Coded Bias." The American Civil Liberties Union describes facial surveillance as "the most dangerous of the many new technologies available to law enforcement" because it can be racially biased.

“Although the accuracy of facial recognition technology has increased dramatically in recent years, differences in performance exist for certain demographic groups,” the United States Government Accountability Office wrote in a report to Congress last year. For example, federal testing found that facial recognition technology generally performed better on lighter-skinned men and worse on darker-skinned women.
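
One way researchers surface that kind of gap is to score the same system separately for each demographic group instead of reporting a single overall accuracy number. The sketch below shows the idea with invented evaluation data; the group labels and trial results are placeholders, not real test figures.

```python
from collections import defaultdict

# Invented evaluation records: (demographic group, predicted identity, true identity).
results = [
    ("lighter-skinned men", "A12", "A12"),
    ("lighter-skinned men", "B07", "B07"),
    ("darker-skinned women", "C33", "C41"),
    ("darker-skinned women", "D20", "D20"),
    # ... a real audit would use thousands of labeled trials per group
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    errors[group] += predicted != actual  # a wrong match counts as one error

for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} misidentification rate over {n} trials")
```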

{snip}

What are some other examples of techno-racism?

  • Unemployment fraud systems

Some states are using facial recognition to reduce fraud when processing unemployment benefits. Applicants are asked to upload verification documentation, including a photo, and their images are matched against a database to verify their identity.

“This sounds great, but commercial facial recognition technologies used by Amazon, IBM and Microsoft have been found to be 40% inaccurate when identifying Black people,” Nkonde said.

“So this will lead to Black people being more likely to be misidentified as attempting to commit fraud, potentially criminalizing them.”

  • Risk assessment tools

One such tool is the kind of mortgage algorithm online lenders use to determine rates for loan applicants.

{snip}

In 2019, a study by UC Berkeley researchers found that mortgage algorithms show the same bias against Black and Latino borrowers as human loan officers do. It found that this bias costs those borrowers up to half a billion dollars more in interest every year than their White counterparts pay.
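
To see how a seemingly small rate difference turns into real money, here is some back-of-the-envelope arithmetic using standard mortgage amortization. The loan size, base rate and 8-basis-point premium below are illustrative assumptions, not figures from the Berkeley study.

```python
def total_interest(principal, annual_rate, years=30):
    """Total interest paid on a standard fixed-rate, fully amortized loan."""
    r, n = annual_rate / 12, years * 12
    monthly_payment = principal * r / (1 - (1 + r) ** -n)
    return monthly_payment * n - principal

principal = 300_000
base_rate = 0.045   # 4.50%, illustrative
premium = 0.0008    # an extra 8 basis points, illustrative

extra = total_interest(principal, base_rate + premium) - total_interest(principal, base_rate)
print(f"Extra interest paid over the life of the loan: ${extra:,.0f}")
# Spread across hundreds of thousands of mortgages a year, premiums this
# small add up to the hundreds of millions of dollars the study describes.
```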

{snip}

How else can we fight techno-racism?

When technology reflects biases in the real world, it leads to discrimination and unequal treatment in all areas of life. That includes employment, home ownership and criminal justice, among others.

One way to combat that is to train and hire more Black professionals in the American technology sector, Nkonde said.

She also said voters must demand that elected officials pass laws regulating the use of algorithmic technologies.

In 2019, federal lawmakers introduced the Algorithmic Accountability Act, which would require companies to review and fix computer algorithms that lead to inaccurate, unfair or discriminatory decisions.

{snip}