Several constituents have emailed me to ask if I would set out in this column my views on the Online Safety Bill.
The Bill has completed its passage through the Commons and is now going through its detailed committee stage in the Lords, where debate rages over how much of the detail of mandatory age verification should be set out on the face of the Bill rather than in subsequent regulations.
The Government has added offences to the Bill, including revenge porn, hate crime, fraud, threats and incitement to violence, and the promotion or facilitation of suicide and people smuggling. Further amendments will add controlling or coercive behaviour, in recognition of the specific challenges women and girls face online.
Ofcom, the UK’s independent communications regulator, will oversee the regulatory regime, backed up by mandatory reporting requirements and strong enforcement powers, including blocking access to non-compliant sites from within the UK and eye-watering fines of up to ten per cent of annual worldwide turnover.
That said, when the Bill was first published my principal worry was that its laudable intent to protect children came at the expense of free speech.
At the heart of the problem was the creation of a new category of ‘legal but harmful’ content, allowing lawful content to be removed when deemed ‘harmful’. In effect, this subcontracted the power of censorship to service providers and raised all sorts of questions about what constitutes a ‘harmful opinion’. It would also have introduced the ridiculous situation whereby opinion could be censored online that would remain perfectly legal if spoken or published in print.
Mercifully, the ‘legal but harmful’ provisions have been struck out of the Bill.
Instead, illegal content will be required to be removed, along with any legal content that a platform provider prohibits under its own terms of service. Adults can choose whether to engage with abuse and comment that fall short of being unlawful, so long as the platform they are using allows such content. It is a free market.
Arguably, the changes have ensured that the Bill substantially protects free speech while holding social media companies to account for their promises to users, guaranteeing that users can make informed choices about the services they use and the interactions they have on those sites.
No solution is ever perfect. The provider retains the ability to silence lawful content that breaches its own rules; in effect, it can exclude arguments of which it disapproves. I do not see a way around this. After all, it is the provider’s own platform, and it can set its own rules in the same way that publishers impose their own standards. Indeed, nobody has an absolute right to see their opinions printed in any particular newspaper or print publication.
In mitigation, we will have to rely on the powers of the regulator to see that providers apply their rules properly and fairly. In addition, providers who capriciously censor content will suffer reputational damage when other publications draw attention to it.
One concern of mine, which falls outside the scope of the Bill but may have a chilling effect on lawful expression, is a ruling by the Commons that social media content generated by Members of Parliament shall be subject to the scrutiny and rulings of the Parliamentary Commissioner for Standards. How long will it be before this official tells MPs what they can and cannot say on social media?