Algorithms, AI, truth, and trust: why social platforms should be worried about the riots

Photo: Hans Vivek

by Hanna Kahlert

Social platforms have had a rocky history with misinformation, from US election cycles to social unrest. Over time, platforms have introduced new restrictions and guidelines intended to help reduce the risks, yet their efficacy is under more scrutiny than ever.

Recent surges of misinformation have resulted in outright violence at historic levels across the UK. Another incident at the Olympics has spiralled into public controversy and fervent criticism, driven by false information about a competitor in a match. In both cases, well-known social media figureheads have been responsible for spreading these false narratives, and the algorithms on the platforms themselves have amplified their voices. When a trend goes viral because an algorithm predicts it will be popular, that is one thing. But when those trends spiral into physical violence and criticism of global organisations on the back of make-believe presented as fact, it becomes a different situation entirely.

Social platforms need to be increasingly wary of these trends, as they will affect legislation and the platforms’ own usage down the line. So what causes escalations like this to happen, from a social perspective?

Format plays a role, especially social video, which is consumed faster and more often than any other form of content on social, and is still relatively new in the grand scheme of things. These videos are made and shared primarily by a niche creative consumer class, rather than by formal outlets with rigorous fact-checking processes, which impacts the quality of information most users are exposed to.

Algorithmic ‘filter bubbles’ are also critical. Two people may sit side by side on their phones, and yet their algorithms may display entirely different versions of events presented by entirely different creators. Even if they watch the same video, they will likely see different comments underneath it, and walk away with wildly different impressions of the general consensus. Every tool for critical thinking and interpretation is undercut by algorithms determined to super-serve content that will either make viewers happy (and thus more likely to re-watch) or angry (and thus more likely to rage-comment).
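To make that mechanism concrete, here is a deliberately simplified sketch in Python. It is not any platform’s actual ranking system; the names (`p_rewatch`, `p_comment`, `engagement_score`) are invented for illustration. The point is only that a ranker optimising for predicted reactions, whether delight or outrage, will hand two users two entirely different feeds:

```python
# Toy illustration only; no real platform ranks content this crudely.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_rewatch: float  # predicted chance the viewer re-watches (a "happy" signal)
    p_comment: float  # predicted chance the viewer comments (often an "angry" signal)

def engagement_score(post: Post, affinity: float) -> float:
    # Both emotional extremes raise the score; neutral content sinks.
    return affinity * (0.6 * post.p_rewatch + 0.4 * post.p_comment)

def build_feed(candidates: list[Post], affinities: dict[str, float]) -> list[Post]:
    # Each user's feed is sorted by their own predicted reactions, so two
    # people sitting side by side can be served entirely different "realities".
    return sorted(
        candidates,
        key=lambda p: engagement_score(p, affinities.get(p.post_id, 0.0)),
        reverse=True,
    )

# Example: the same three posts, two different users, two different feeds.
posts = [
    Post("calm-explainer", p_rewatch=0.2, p_comment=0.1),
    Post("feel-good-clip", p_rewatch=0.9, p_comment=0.1),
    Post("outrage-bait", p_rewatch=0.1, p_comment=0.9),
]
user_a = {"feel-good-clip": 1.0, "outrage-bait": 0.2, "calm-explainer": 0.5}
user_b = {"feel-good-clip": 0.2, "outrage-bait": 1.0, "calm-explainer": 0.5}
print([p.post_id for p in build_feed(posts, user_a)])  # feel-good content first
print([p.post_id for p in build_feed(posts, user_b)])  # outrage content first
```

Neither user ever sees the content the system predicts they will react to mildly, which is how the ‘side by side, different realities’ effect compounds over time.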

A deeper issue, however, is trust in the digital world itself. Every picture could be AI generated, every piece of text could be written by a marketer, and every video could be edited. A week later, the source could be labelled problematic and discounted. As a result, people are sometimes more likely to trust a clip they have seen in a thousand videos on TikTok or posts on Facebook (all posted by different creators, giving it the sheen of “peer confirmation”) than a report from a single official news outlet.

Social platforms have so far been able to walk a careful line between neutrality – allowing users to post whatever they want – and prodding algorithms to promote content that serves the platforms’ interests. Yet, with riots and social unrest on their hands, especially in such a big election year, they will need to tread carefully. It is one thing to ‘move fast and break things’; it is quite another when the thing being broken is national stability. Regulators will be on high alert, and new governments will have entire terms ahead of them in which to address these issues. If social platforms do not intervene themselves, legal forces will likely compel them to do so in the near future. Coming under such scrutiny is not a good position to be in, at a time when consumers are already tending towards reducing their screen time.

Competitors are already emerging in the social space, offering human curation and algorithm-free content feeds. Fragmentation of the marketplace is more likely than any new platform becoming dominant, but equally, fragmentation means the current major platforms have ground to lose if challenged. There is a growing need for better, proactive fact-checking by platforms. Moreover, rather than super-serving the individual at all costs, transparency around why certain content is shown and more consistent organisation of comments sections would go a long way towards building trust and reducing division.
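One hypothetical shape that transparency could take: attach a plain-language explanation to every item a feed serves, built from the signals that actually drove the ranking. The sketch below is illustrative only, with invented names (`FeedItem`, `explain`) rather than any platform’s real API:

```python
# Hypothetical sketch: surfacing *why* an item was shown, in plain language.
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    post_id: str
    reasons: list[str] = field(default_factory=list)  # human-readable "why"

def explain(post_id: str, followed: bool, topic_match: bool, trending: bool) -> FeedItem:
    # Translate ranking signals into statements the viewer can audit.
    reasons = []
    if followed:
        reasons.append("from a creator you follow")
    if topic_match:
        reasons.append("similar to videos you watched recently")
    if trending:
        reasons.append("trending in your region")
    return FeedItem(post_id, reasons or ["broad popularity"])

item = explain("outrage-bait", followed=False, topic_match=True, trending=True)
print(f"Shown because: {'; '.join(item.reasons)}")
# Shown because: similar to videos you watched recently; trending in your region
```

Even a label this simple would let users notice when their feed is driven by engagement prediction rather than by the sources they chose to follow.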

Social platforms need to decide where they stand – either they are entertainment companies that can monetise as such, which comes with curation responsibilities, or they are purveyors of human connection and interaction and therefore need to promote healthy discourse. In either case, they need to be more wary of their role in misinformation. They cannot continue to offer individualised reality as entertainment and feign surprise when problems arise, without inviting serious questions they may not like the answers to.
