The government has announced further changes to the draft Online Safety Bill, imposing two new duties on platforms. The first will require the largest and most popular social media sites to give adults the ability to block people who have not verified their identity on a platform. The second will require platforms to give users options to opt out of seeing harmful content.
First duty – user verification and tackling anonymous abuse
The draft Online Safety Bill already places requirements on in-scope companies to tackle harmful content posted anonymously on their platforms and to manage the risks around the use of anonymous profiles. Under the new duty, so-called 'category one' platforms – those with the largest number of users and highest reach, and therefore considered to pose the greatest risk – must offer ways for their users to verify their identities and control who can interact with them.
This could include giving users the option to tick a box in their settings to receive direct messages and replies only from verified accounts. The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty, but they must give users the option to opt in or out.
When it comes to verifying identities, some platforms may choose to give users an option to verify their profile picture to ensure it is a true likeness. Others could use two-factor authentication, where a platform sends a prompt to a user's mobile number which they must confirm to verify the account. Alternatively, verification could involve people using a government-issued ID, such as a passport, to create or update an account.
The government says that it will not ban anonymity online entirely, as it would negatively affect those who have positive online experiences or use it for their personal safety such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality.
The new duty aims to strike a better balance between empowering and protecting adults – particularly the vulnerable – and safeguarding freedom of expression online, because it will not require any legal free speech to be removed. While it will not prevent anonymous trolls from posting abusive content in the first place – provided the content is legal and does not contravene the platform's terms and conditions – the government intends that it will stop victims being exposed to such content and give them more control over their online experience.
Users who see abuse will be able to report it, and the bill aims to significantly strengthen the reporting mechanisms companies have in place for inappropriate, bullying and harmful content, and to ensure they have clear policies and performance metrics for tackling it.
Second duty – giving people greater choice over what they see on social media
The bill will already require in-scope companies to remove illegal content such as child sexual abuse imagery, the promotion of suicide, hate crimes and incitement to terrorism.
However, the government points out that there is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but which still causes significant harm. This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation. Much of this is already expressly forbidden in social networks’ terms and conditions but it is often allowed to remain and is actively promoted to people via algorithms.
Under a second new duty, ‘category one’ companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform. These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.
Ofcom will set out in codes of practice and accompanying guidance how companies can fulfil the new user verification duty, including the verification options companies could use.