Can the internet really be policed?
7 May 2019
It’s been heralded as the means to make the UK “the safest place in the world to be online”, but are the measures proposed in the UK Government’s Online Harms White Paper sufficient, or even feasible? What are the implications for free speech, and when does protection veer into censorship? These are complex questions, yet they will need to be answered in the coming months and years.
Internet content can be harmful
I think we can all agree that some dark, harmful forces exist in the real world and flourish online. The grooming of children and the spread of terrorist propaganda online are well documented, and measures to curb these evils are therefore welcome.
Social media companies, previously so insistent in defending their role as neutral platforms rather than publishers, have gradually accepted more responsibility as public and political sentiment has soured against them. Now, with the threat of fines and of regulation stretching so far as to make repeat offenders’ platforms and sites inaccessible to Brits, social media and tech giants are strongly incentivised to monitor the activity taking place under their watch more closely.
Is it feasible?
One area where the white paper is light on detail is how these protective measures might be achieved in practice. The government seems to presume that social media giants already have the ability to police their platforms entirely, but choose not to. Current monitoring relies on a combination of human and algorithmic reporting, which fails in two key ways: humans are slow to act in an environment where content spreads almost instantly, and bots are easily fooled.
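To see how easily automated moderation can be fooled, consider the following minimal sketch in Python. It is purely illustrative: the blocklist, the phrases, and the matching logic are all hypothetical stand-ins, and real platforms use far more sophisticated classifiers. The underlying weakness it demonstrates, that trivial obfuscation slips past pattern matching, is the same one that plagues production systems at much greater scale.

```python
# A minimal, hypothetical sketch of naive keyword-based moderation.
# The blocklist and example posts are placeholders, not real platform rules.

BLOCKLIST = {"banned phrase"}  # hypothetical prohibited term

def flag_content(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase verbatim."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

print(flag_content("This contains a banned phrase."))  # True: caught
print(flag_content("This contains a b4nned phr4se."))  # False: trivially evaded
```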
In reality, policing all the content on a platform with the breadth and reach of Facebook would take a great deal of money and time. Neither the government nor the platform owners is likely to want to bear that cost, so where will the money come from?
What of free speech?
Many detractors of the white paper have homed in on the way it blurs the boundary between content that is illegal and content that is merely harmful. Whilst much of the content and activity mentioned is plainly illegal – child abuse content, incitement to commit acts of terrorism – the white paper also refers to material such as “intimidation, disinformation, the advocacy of self-harm” that is harmful but not illegal. The issue is that the government plans to handle both in the same way.
Jim Killock, the executive director of the Open Rights Group, highlights the lack of nuance displayed in the white paper: “The government is using Internet regulation as a blunt tool to try and fix complex societal problems. Its proposals lack an assessment of the risk to free expression and omit any explanation as to how it would be protected.”
For free speech activists, the white paper signals a slippery slope to censorship. They argue that treating content deemed harmful in the same way as content that is illegal is a first step towards the powers that be deciding that we, the public, cannot access certain content online ‘for our own good’.
More nuance needed
Whether you believe this is a necessary step towards stopping the chaotic mess of social media from spilling into the real lives of vulnerable people, or a dangerous blow to freedom of speech struck by an increasingly empowered nanny state, the white paper ultimately looks half-baked.
The fine line between illegal and harmful content needs to be drawn more clearly, the moderation techniques spelled out in practical detail, and a funding model properly proposed. Protecting the public, particularly its most vulnerable members, from harmful content online is an incredibly important issue, and one that deserves a better plan.