Amid the storming of the U.S. Capitol four months ago, social media giants Facebook, Twitter, and YouTube banned then-President Donald Trump from their platforms, citing repeated violations of their terms of service and incitement to violence. The unilateral decisions to de-platform a world leader – long overdue, some argued – sent shockwaves across the globe, stirring public debate about the ever-growing power of tech giants and their lack of accountability. The 2021 Democracy Perception Index survey, published last week, finds that roughly half of people globally (48%) consider Big Tech a threat to democracy in their country, a ranking trailing only economic inequality and limits on free speech. Moreover, there is little variation between democracies and non-democracies, and few differences between regions – a testament to the issue’s universality.
Corporate Digital Governance in Action
Alongside the Democracy Perception Index survey came news of the verdict on Trump’s Facebook ban. Facebook’s Oversight Board upheld the decision but, criticizing its arbitrary and indefinite nature, ordered Facebook to review the ban and “justify a proportionate response” that applies equally to all users. The independent Oversight Board, set up on October 22, just weeks before the 2020 U.S. Presidential Election, draws its members from different continents, including human rights lawyers, former CEOs, scholars, journalists, and former heads of state. The quasi-judicial body was first proposed in early 2018 to ensure people can appeal content-moderation decisions.
As a private company, Facebook can legally police and censor content according to its own guidelines. However, with nearly three billion active users, the platform has become a so-called “quasi-public space.” At that scale, arbitrary censorship and algorithmic biases have significant implications for freedom of speech and people’s everyday lives, prompting mounting calls for regulators to step in. The creation of the Oversight Board, whose policy recommendations are non-binding, is Facebook’s response to the ever-growing task of moderating online content, as well as an attempt to keep lawmakers at bay.
Without delving into the board’s rulings themselves, the cases taken up thus far illustrate the difficulty of crafting global moderation policies. For example, there is the Dutch cultural tradition of Zwarte Piet (Black Pete), which includes the use of blackface, widely seen in many other countries as a harmful racial stereotype. There are questions of the context-dependent acceptance of nudity, and of underlying intent when sharing controversial or otherwise hateful content – is it an act of support or an attempt to inform and educate? Views and norms on such issues vary both within and across cultures and regions. Silicon Valley’s one-size-fits-all policies thus increasingly collide with the multicultural realities of today’s globalized Internet – hence the need for a diverse Oversight Board.
Too Little, Too Late, on Both Sides of the Atlantic
Even among those who had supported ‘false information’ warnings on Trump’s social media posts, the ban caused great concern in Europe. Chancellor Angela Merkel called the move “problematic,” EU Commissioner Thierry Breton dubbed it the “9/11 moment of social media,” and Prime Minister Boris Johnson joined French Finance Minister Bruno Le Maire in calling for a debate on regulation. Condensing the sentiment, EU High Representative/Vice President Josep Borrell argued the bloc needed to “better regulate the contents of social networks, while scrupulously respecting freedom of expression.” There is an emerging understanding that Europe cannot rely on U.S. legislation, especially given conflicting views on issues such as data privacy, digital taxation, and artificial intelligence.
These heightened calls for regulation come alongside the EU Commission’s existing push to legislate Big Tech and “quasi-public spaces.” The Digital Services Act, proposed in December 2020, would set rules for online content, grant regulators new powers to scrutinize online platforms, and enforce safeguards for user rights. Meanwhile, the Digital Markets Act would make it possible to penalize large online platforms for monopolistic behavior. Some impatient countries, including Germany and France, have already pushed through national legislation in recent months, with Hungary and Poland set to follow.
There is growing bipartisan support for legislative action across the Atlantic as well. At 62 percent, the share of Americans who consider Big Tech a threat to democracy is the highest of any threat surveyed. Moreover, like Trump before him, albeit for different reasons, President Biden has voiced support for reworking Section 230 of the U.S. Communications Decency Act to hold online platforms responsible for user content. Meanwhile, the Federal Trade Commission is suing Facebook over monopoly practices, and the U.S. Department of Justice is building its largest antitrust case in twenty years against Google.
Given the sheer scale of Big Tech corporations and their self-propelling drive towards ever-greater market dominance, regulatory pushback was bound to happen sooner or later, and it is encouraging that the public conversation has already begun. Make no mistake: the legislative machinery is picking up steam, but when it comes to freedom of speech and online platforms, there will likely be few quick answers or one-size-fits-all solutions.