Social Media

How Social Media Companies Need To Address Disinformation Globally

Michael Posner

Civil society and rights groups in Myanmar said Facebook has failed to adequately act against online hate speech that incites violence against the country's Muslim minorities, neglecting to effectively enforce its own rules.

Political activists and autocratic governments are misusing the Internet to distribute massive amounts of deliberately false information. This harmful content is undermining political discourse and encouraging extremism in the U.S. and elsewhere. The debate about this issue has focused disproportionately on the U.S. and Europe, and partly for this reason, the big Internet companies have dramatically underinvested in their operations in the global South.

Each of the major online platforms (Google, Facebook, and Twitter) has been bold in heralding its global growth to investors. Most of this growth is occurring in places like India, Nigeria, and Brazil. But the companies have failed to devote proportionate resources to serving their users in these places because advertising revenues there are still relatively modest. The Internet giants are trying to have it both ways: celebrating growth, on the one hand, but failing to provide essential support and oversight, on the other.

One of the risks to which the Internet platforms should be paying more attention is political disinformation. In the last decade, three broad categories of disinformation have emerged. The first concerns governments that seek to use online disinformation as a weapon against other states. Russian interference in Europe and the United States is the prime example. Second are the cases where authoritarian governments have misused the Internet to attack their domestic political opponents or to enlist their own citizens to oppress ethnic or religious minorities. The systematic attacks by Myanmar government leaders against members of the Rohingya Muslim community are a dramatic example. In a third category, political parties, rather than governments, are using social media accounts to promote disinformation that discredits their rivals. The floods of disinformation on WhatsApp and Facebook during recent elections in Brazil and India are instructive examples.

Though the Internet companies are well aware of each of these challenges, to date none of them has made the necessary investments to oversee content on their sites, and they have opted not to assume full responsibility for mitigating the damage caused by disinformation and harmful content online. As Internet access expands, this problem is accelerating, and it now threatens democratic discourse everywhere.

In Myanmar, where 93% of online users rely on Facebook as their gateway to the Internet, the country's military government and its supporters began to use Facebook in 2013 to mount a propaganda campaign against the Rohingya Muslims. Facebook had ample warning about these online attacks but failed to respond in a timely manner. In 2018, when the company finally recruited a local team to monitor its site and took down the Facebook accounts of several senior military leaders, much of the damage had already been done.

In Brazil, Jair Bolsonaro, now the country's president, used WhatsApp accounts last October to communicate his highly inflammatory, divisive, and often false messages to win a surprise victory. In the recent Indian election, Prime Minister Narendra Modi's Bharatiya Janata Party (BJP) exploited longstanding divisions tied to social and religious identities to rally his base against the rival Indian National Congress. The BJP mobilized hundreds of thousands of neighborhood WhatsApp users to pump out messages filled with inflammatory disinformation.

The spread of disinformation on a massive scale has become the new normal. The Internet giants have exacerbated the problem by refusing to recruit sufficient local staff in all of the places where they operate and by steadfastly refusing to accept primary responsibility for taking down provably false information, especially in the political realm. Instead, they outsource the task of identifying disinformation by enlisting nongovernmental organizations to make these judgments. When disinformation is found, the companies merely reduce its prominence, rather than taking it down.

In the face of this insufficient action by the companies, an increasing number of governments are now moving to regulate political content. In recent months, Australia and Singapore have passed new laws allowing governments to police online platforms aggressively. The governments of the UK and France are debating similar proposals. Government restrictions on content pose a threat to the basic human right of free speech. Draconian restrictions imposed by the governments of China, Iran, and Russia illustrate this danger.

Prompted in part by these and other threats of government regulation and fines, as well as potential liability under antitrust laws in Europe and the U.S., the Internet companies have begun to take modest steps to address aspects of the disinformation problem. To date, none have adopted a comprehensive or sufficient response.

Here are three broad areas where the companies now need to act. First, each company needs to build staff capacity in all of the places where it does business. The companies cannot outsource this agenda because they alone have the access and tools to best address disinformation in real time. This will require a significant investment in people. Local staff in each of these countries will provide the companies with a much-needed understanding of local culture and politics, as well as the capacity to review content in multiple local languages.

Second, the companies need to acknowledge their responsibility to remove provably false disinformation from their sites, especially in the political realm. These decisions will not be easy and will require serious-minded appraisals and a rigorous appeals process. Material removed should be limited to that which is provably false. Currently, the companies cling to the untenable premise that they are not “arbiters of the truth” and thus have no responsibility to take down provably false information. And while they are not news organizations like The New York Times, they are much more than passive platforms. Given the rising volume of deliberately false information and the damage it is doing to democratic discourse, the companies need to adopt this new paradigm, one that will set practical but principled rules for Internet governance and establish clear operational guidelines.

Finally, to oversee this challenging agenda, each of these companies needs to hire a senior content overseer to direct its efforts and signal the central importance of content governance. These new executives should report directly to C-suite executives and should be individuals with a serious news media background. Currently, these responsibilities are dispersed among people in different divisions with different mandates (for example, those who oversee community standards, ferret out inauthentic sources, or address cybersecurity). Hiring a content overseer would coordinate these disparate efforts and underscore the importance of these issues within each company.

This is an ambitious agenda, one that will require the investment of significant additional resources. The Internet is a force for good but also a tool that, if not properly governed, poses an existential threat to our security, prosperity, and democratic order. The stakes for reform could not be higher.

I am the Jerome Kohlberg professor of ethics and finance at NYU Stern School of Business and director of the Center for Business and Human Rights. I served in the Obama …