How Facebook Algorithms Promote Hate and Toxic Content

Whistleblower Frances Haugen has revealed hard evidence—internal Facebook documents—that show it knew its algorithms promote dangerous content but did nothing about it so that it could maximise its advertising revenue.

Facebook is in the limelight for both the right and the wrong reasons. The wrong reason is that what was supposed to be a small configuration change took Facebook, Instagram and WhatsApp down for a few hours last week. The outage affected billions of users, showing how important Facebook and the other tech giants have become to our lives and to other businesses. The much more significant issue is whistleblower Frances Haugen, a former employee, making tens of thousands of pages of internal Facebook documents public. The documents show that Facebook’s leadership repeatedly prioritised profit over social good. Facebook’s algorithms polarised society and promoted hate and fake news because such content drove up “engagement” on its platforms. That this tears apart communities, and even endangers young teens who feel they do not have “perfect” bodies, matters not a jot to Facebook.

The Wall Street Journal has published detailed exposés quoting Facebook’s internal documents and Frances Haugen, the whistleblower who appeared on CBS 60 Minutes and testified at Congressional hearings. “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen told CBS. “And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

The 37-year-old data scientist has filed eight whistleblower complaints against Facebook with the Securities and Exchange Commission (SEC), with the help of a non-profit organisation, Whistleblower Aid. These complaints are backed by hard evidence: the tens of thousands of internal documents she secretly copied before leaving Facebook.

Why is this big news when these issues have been raised time and again, more prominently after Cambridge Analytica? Did we not always know how Facebook, WhatsApp, and other platforms have become powerful instruments to promote hatred and divisive politics? Have UN investigators not held Facebook responsible for the genocidal violence against Rohingyas? Did we not see similar patterns during the communal riots in Muzaffarnagar?

The big news is that we now have evidence that Facebook was fully aware of what its platform was doing. We have it from the horse’s mouth: the internal Facebook documents that Haugen has made public.

By privileging posts that promote “engagement”—meaning people reading, liking or replying to posts on Facebook, WhatsApp and Instagram—Facebook ensured that people stayed on its platforms much longer. Those users could then be “sold” to advertisers more effectively and shown more and more ads. Facebook’s business model is not promoting news, friendly chit-chat among users, or entertaining people. It is selling us to those who can sell us merchandise. And, like Google, it understands far better than most who we are and what we may buy. This is what gives Facebook 95% of its revenue and made it one of only six trillion-dollar companies (as of September 2021) by market capitalisation.

Testifying before Congress, Haugen said Facebook uses artificial intelligence to find dangerous content. The problem is, “Facebook’s own research says they cannot adequately identify dangerous content. And as a result, those dangerous algorithms that they admit are picking up the extreme sentiments, the division[s]...”

That this was happening is widely known and has been discussed even in our columns. Facebook’s answer was that it was setting up an independent supervisory board for oversight and employing a large number of fact-checkers. This and other measures would supposedly help filter out hate posts and fake news. What Facebook hid was that all of this was cosmetic. What you actually see in your feed—the content you “engage with”, in Facebook’s terms—is decided by its algorithms. And these algorithms were geared to promote the most toxic and divisive posts, since that is what drives up engagement. Increasing engagement is the key objective of Facebook’s algorithms, and it defeats any measure to detoxify the platform’s content.
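To see what “geared to promote” means mechanically, here is a minimal, hypothetical sketch of engagement-based ranking in Python. The post fields and weights are purely illustrative assumptions, not Facebook’s actual code; the point is that whatever best predicts likes, comments and reshares rises to the top of the feed, whatever its social cost.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The fields and weights below are illustrative assumptions, not
# Facebook's actual code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float     # model's estimate of likes this viewer will give
    predicted_comments: float  # comments keep users on the platform longer
    predicted_reshares: float  # reshares spread the post to new feeds

def engagement_score(post: Post) -> float:
    # Interactions that keep people on the platform are weighted higher.
    # Angry, divisive posts tend to provoke comments and reshares, so they
    # score well under exactly this kind of objective.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_reshares)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is just the candidate posts, sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Nothing in this scoring function asks whether a post is true or harmful; if toxic posts are what people comment on and reshare, the sort order rewards them automatically.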

Haugen’s Congressional testimony directs us to the real problem with Facebook and what governments must do to protect citizens: the focus should not be on the individual items of content people post but on Facebook’s algorithms. It also brings back into play the instruments that countries have in their kitty to discipline Facebook. These are the “safe harbour” laws that protect intermediaries like Facebook, which do not generate content themselves but provide their platform for what is called user-generated content. In the United States, it is section 230 of the Communications Decency Act; in India, it is section 79 of the Information Technology Act.

In the US, a section 230 overhaul would hold the social media giant responsible for its algorithms. In Haugen’s words, “If we had appropriate oversight, or if we reformed [section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking...Because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart, and in places like Ethiopia, it’s literally fanning ethnic violence.” The key problem is not the hateful content that users generate on Facebook. It is Facebook’s algorithms, which continuously push poisonous content into our feeds to maximise advertising revenue.

Of course, the widespread prevalence of toxic content on Facebook’s platforms is helped along by its wilful neglect of languages other than English and a few European ones. Even though Hindi has the fourth-highest and Bengali the fifth-highest number of speakers in the world, according to Haugen, Facebook does not have enough content checkers in these two languages.

In these columns, we have explained why divisive content and fake news are more viral than other content. Haugen, with thousands of pages of Facebook’s internal research, confirms what we and other serious researchers have been saying all along. The algorithms that Facebook and other digital tech companies use today do not directly encode rules to drive up engagement. They instead use machine learning, or what is loosely called artificial intelligence, to create those rules. It is the objective—increasing engagement—that creates the rules, which in turn fill our feeds with toxic content, tearing societies apart and damaging democracy. We now have hard evidence, thousands of pages of Facebook’s internal research reports, that this is indeed what has been happening. Worse, the Facebook leadership, including Mark Zuckerberg, was fully aware of the problem.
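As a rough illustration of that point, consider the hedged sketch below: a deliberately simple model (ordinary linear regression, standing in for the deep networks real recommenders use) trained with engagement as its only objective. All feature names and data are invented for illustration.

```python
# Hypothetical sketch: no rule says "promote outrage". A model is trained
# only to predict engagement, and whatever correlates with engagement,
# outrage included, gets amplified. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy log of past posts: [outrage_words, photo_attached, post_length]
X = np.array([
    [8, 0, 120],  # angry rant
    [0, 1, 40],   # holiday photo
    [6, 0, 200],  # divisive political post
    [1, 1, 60],   # family update
    [7, 0, 90],   # conspiracy post
    [0, 1, 80],   # pet picture
])
# Observed engagement (clicks + comments + reshares) for each post
y = np.array([950, 120, 870, 150, 880, 140])

# Train purely to predict engagement: the objective creates the "rules".
model = LinearRegression().fit(X, y)

# The learned weights are the de facto ranking rules. A large positive
# coefficient on outrage_words means outrage gets promoted, even though
# no engineer ever wrote that rule down.
for name, coef in zip(["outrage_words", "photo", "length"], model.coef_):
    print(f"{name}: {coef:.1f}")
```

Here the model ends up with a large positive weight on outrage, not because anyone coded it, but because outrage predicted engagement in the training data; scale the same dynamic up to billions of posts and you get the feeds Haugen describes.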

Not all the harm on Facebook’s platforms was caused by algorithms. From Haugen’s documents, we find that Facebook had a “white list” of users whose content would be promoted even if they violated its guidelines. Millions of such “special” users could break Facebook’s rules with impunity. We earlier wrote on The Wall Street Journal’s evidence of how Facebook India protected BJP figures even though repeated flags about their posts were raised within Facebook.

This is not all that Haugen’s treasure trove of internal documents reveals. Reminiscent of cigarette companies’ research on how to hook children on smoking young, Facebook researched what it called “pre-teens”: children aged 9-12 years. The aim was to work out how to hook pre-teens on Facebook’s platforms so that the company would have an unending supply of new consumers. This is despite its internal research showing that its platforms promoted anorexia and other eating disorders, depression, and suicidal tendencies among the young.

All of this should damage Facebook. But it is a trillion-dollar company, one of the biggest in the world. Its fat cash balance, coupled with the power it wields in politics and its ability to “hack” elections, provides the protection that Big Capital enjoys under capitalism. The one cardinal sin Capital will not tolerate is lying to other capitalists. The internal documents that Haugen has submitted to the SEC could therefore finally lead to a pushback against the social media giants and to their regulation. If not strong regulation, it could at least lead to some weak constraints on the algorithms that promote hate.

To end, I quote from a ten-year-old interview with Jeff Hammerbacher, then a 28-year-old Silicon Valley whiz-kid: “The best minds of my generation are thinking about how to make people click ads.” This is what is driving the march of the social media giants to their trillions.
