The UK Government’s full response to the Online Harms White Paper, outlined today by the Home Office and Department for Digital, Culture, Media & Sport, is well-intentioned and its scale – in some areas – admirable. In other areas, however, the proposals are vague and worryingly restricted.
Under the proposals, digital companies will have a duty of care to protect their users online. They will be held responsible for both the illegal and legal (but harmful) content that appears on their platforms.
Fair Vote UK welcomes the scale of the fines (up to £18 million or 10% of global annual turnover, whichever is the higher) that the newly appointed regulator Ofcom will be able to administer. The differentiated landscape – with the largest tech companies given extra responsibilities – is also to be welcomed.
There are significant areas of ambiguity and weakness, however, that civil society should focus on reforming before the bill is put to Parliament next year.
- Too much responsibility is being delegated to big tech. Allowing the likes of Facebook to decide what “legal but harmful” content is allowed on their sites is essentially the system we have now, and it is one that does not work.
- The proposed exemption for journalistic content (and the comment sections on journalistic content) is fraught with problems and will blur and complicate regulation.
- Only including “disinformation and misinformation that could cause significant harm to an individual” is too narrow. This will exempt disinformation and misinformation that, for example, is harmful to democracy at large or to minority and marginalised groups.
- Using artificial intelligence to moderate content is not a panacea and should not be celebrated as so. The training and resourcing of human moderators should be prioritised.
- It is still unclear what exactly is meant by “harmful content” and what will fall within its parameters. We need a better idea of this before the legislation is put to Parliament next year.
The European Commission has also outlined comprehensive new regulations for the tech world today, and they provide a useful point of comparison. Companies operating within the EU will soon, as in the UK, be required to do more to prevent the spread of hate speech.
Yet the Commission’s proposals go much further than the UK’s in many areas. The difference can perhaps best be described as one of focus. The UK’s Online Harms legislation will concentrate purely on the content hosted on tech platforms. The EU, in contrast, is targeting the industry’s problematic operation as a whole: anti-competitive behaviour, monopolistic tendencies, advertising and so on.
The two new regimes are similar in that key questions remain unanswered with regard to enforcement. Both policy proposals have lofty ambitions, but the devil is in the details, and we will continue to monitor how these proposals evolve through their respective legislative processes.