With this past Black History Month misstep, Google forgot how racist the internet is

This past February, Google made it easier for everyone to support Black businesses with its “search Black-owned near you” feature, which the company advertised and promoted heavily.

However, businesses and customers noticed a downside to Google’s Black History Month stunt: a surge of overwhelmingly racist reviews on business profiles.

We live in a world where online reviews matter. After stock trading app Robinhood shut down Gamestop’s stock purchases, thousands of angry people took to the Google Play Store reviews section of the app. In just one day, Robinhood’s rating fell from five stars to one star, and Google stepped in to delete nearly 100,000 negative reviews, saying that they were “inorganic.”

Forbes reports that 93% of people read local reviews to make a shopping decision. So when Black businesses are sabotaged by similarly “inorganic” racist reviews, they suffer. Yet Google both reaps the profits from consumers using its search engine and benefits from good press about its “wokeness.”

Google either did not consider this possibility or decided to ignore it, even though white supremacists have long used the internet as a means of harassing and targeting Black people. This has become even more evident in the last few years, as we’ve seen white-supremacist rhetoric online lead to physical violence offline. Knowing this, Google’s decision to spotlight Black-owned businesses without considering the harm that could follow is a perfect example of how tech companies fail to think about the social context of their actions, and how that failure, in turn, harms Black communities.

This is part of a long pattern. Just two months ago, Google fired Timnit Gebru, a prominent Black researcher who has done groundbreaking work showing facial recognition software’s bias against people of color. The company maintains that Gebru resigned: part of its stated reason for locking her out of her accounts before she had actually tendered a resignation was that she had sent an internal memo criticizing the company’s diversity, equity, and inclusion efforts. And in February, Google fired another AI ethics researcher who was looking for evidence of discrimination against Timnit Gebru, claiming she had violated corporate conduct and security policies. But the rest of us see a pattern of disrespect towards people of color and anyone who dares to call out racism.

Google’s actions reveal an even deeper problem: the Silicon Valley belief that tech is neutral.

Let’s be really clear: tech is not neutral. Tech, by which I mean software and hardware, cannot be neutral because the world we live in is not just or equal. Take one of Timnit Gebru’s areas of expertise: facial recognition. Facial recognition algorithms falsely identify Black and brown faces 10 to 100 times more often than white faces. Nonetheless, tech companies disingenuously pretend tech serves everyone equally, which allows their platforms and products to be used to harm marginalized communities, disrupt democracy, and spread authoritarianism. Tech corporations then profit from the harm they inflict while claiming they are innovative and free of prejudice. Google wasn’t neutral in jeopardizing Black-owned businesses or firing Black employees. Parler wasn’t neutral in making it possible for violent traitors to plan a coup. And Facebook wasn’t neutral in ignoring calls—until it was too late—to de-platform Donald Trump.

If tech companies like Google continue to operate under the false assumption that tech is neutral, then even their well-intentioned ideas, like spotlighting Black-owned businesses, will fall short. And their more obviously harmful actions—like selling technology to hundreds of police departments around the country that is used to target people of color—will continue to be devastating.

What are tech companies to do?

They can start by acknowledging that tech is built with bias and asking themselves questions like: If data were leaked, who could get hurt? Who could use our technology to harm others? What steps do we take to mitigate harm?

These are complicated issues, and change won’t happen overnight. Tweeting or placing ads about your so-called commitment to racial justice is easy; the real work is examining your policies, behaviors, and products for bias. Until then, tech will not work for us all.

Nathan Odige