The social reckoning is nigh: The Haugen testimony and Facebook blackout hint at imminent regulatory changes
The race between social media (and broader tech) innovation and governmental regulation has been a hare-and-tortoise affair for the past decade-plus. However, the digital-first transition driven by COVID-19 lockdowns, the impact of social media misinformation on several elections and voting processes, and the viral vaccine scepticism currently threatening our return to ‘normal’ are finally pushing regulation to catch up.
In September, China passed regulations on entertainment – limiting gaming for under-18s and cracking down on the endless scroll and black-box algorithms, which have repeatedly been pointed to as culprits for the rise of misinformation and poor mental health. The US has considered regulation in the past, with last year’s congressional hearings examining potential anti-trust violations by Facebook and its three core social assets (Facebook, Instagram and WhatsApp). India has also been at odds with WhatsApp recently, requesting access to chat histories to trace the sources of viral articles inciting acts of violence.
Cue this week, when Facebook suffered its worst-ever outage: a five-hour total blackout on Monday of all Facebook-hosted platforms (with reports of employees unable even to get into the building to try to fix the problem because the servers were down, adding a comedic tinge to what was otherwise a disaster). Beyond the lost revenues during this time, the wider impacts were profound. In the West, most users of the platforms simply turned to the likes of Twitter and Reddit; but in countries like India, where WhatsApp is the default communication tool for families, and Indonesia, where Facebook’s internet.org provides free internet to many and is thus their portal into the digital world, the impacts were far more severe. Facebook Inc.’s operational responsibility for WhatsApp, alongside Instagram and the Facebook platform, looked this week like a global communications vulnerability rather than a strength.
The timing of this debacle was also notable, given that this week the Facebook whistle-blower, Frances Haugen (a former product manager at the company), was set to testify before a Senate subcommittee. She alleged that the company’s structure incentivises metrics over people, resulting in products that can harm the mental health of the children and young adults whose engagement they actively court, even as the company verbally discourages underage use. That the addictive properties of social media can have harmful effects on mental health is not new, but this is the first time a Facebook employee has come forward to say that this is not only well known internally, but also deliberately under-addressed.
Furthermore, algorithms that promote content to drive engagement on a platform often rely upon the sensational: conspiracy theories, emotional rather than factual reporting, and content that can compound mental health issues or compulsions. Haugen’s testimony revealed that despite Facebook’s best intentions to police misinformation on the platform, such efforts struggle to keep up with the sheer volume of content. Undoubtedly, this is a serious problem that all social media companies must confront; however, Facebook, as the largest company in the space, bears a particular onus.
Intriguingly, the focus of this congressional hearing was not anti-trust. In fact, Haugen advised against disaggregating the company as a solution, saying “the systems will continue to be dangerous even if they’re broken up”. Nor has she shared the documents submitted to the committee with the Federal Trade Commission (FTC), which is investigating the company for anti-trust violations. The legal conversation is evolving from simply asking ‘how does Facebook make money?’ to an informed, nuanced discussion of the impacts the company can have on users and societies – and, from there, to a search for solutions to the problems being identified.
Regulation seems to be on its way to address issues of transparency, accountability, and strategic practice. In the UK, Ofcom has already issued new guidance to video-sharing platforms on how to protect their users from harmful online content. This guidance is set to be superseded by the Government’s Online Safety Bill, which is currently before Parliament.
Social networks have long held the line that they are impartial platforms, not responsible for the content their users make or share. The past year has eroded the perceived legitimacy of this position, with the platforms beginning to step up and address the spread of misinformation. However, there is a growing politically driven consensus that more must be done, both by the companies and by regulators. On the consumer side, little is liable to change – online age restrictions remain prone to being bypassed by digitally savvy younger web users, and consumers will continue to use the social network that connects them to their friends, regardless of specific feature iterations. At the corporate strategic level, however, a rethink is required on how to combat misinformation where it starts, and how to build products that do not harm users’ mental health as a side effect of driving engagement. A failure to do so, on the part of the leading social media platforms, will be a de facto invitation for governments to accelerate regulatory intervention on behalf of consumers.