The UK government has published its long-awaited Online Safety Bill in draft, along with explanatory notes, for pre-legislative scrutiny. It will be considered by a committee of MPs before being formally introduced to Parliament. Publication follows calls for greater regulation of internet companies in response to rising levels of online abuse.
The Bill seeks to establish a new regulatory regime to address illegal and harmful content online, with the aim of preventing harm to individuals in the UK. It imposes duties of care in relation to illegal content and content that is harmful to children on two kinds of provider: internet services that allow users to upload and share user-generated content, and search engines that enable users to search multiple websites and databases.
The Bill confers powers on Ofcom to oversee and enforce the new regulatory regime (including dedicated powers in relation to terrorism content and child sexual exploitation and abuse content), and requires Ofcom to prepare codes of practice to assist providers in complying with their duties of care. The Bill also expands Ofcom’s existing duties in relation to promoting the media literacy of members of the public. Ofcom will be given the power to fine companies failing to comply with their duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher, and it will also have the power to block access to sites.
Duty of care
Following the government’s response to the Online Harms White Paper, all companies within scope of the new rules will have a duty of care towards their users. This will require them to consider the risks their sites may pose to the youngest and most vulnerable people and to act to protect children from inappropriate content and harmful activity.
They will need to take robust action to tackle illegal abuse, including swift and effective action against hate crime, harassment and threats directed at individuals.
The largest and most popular social media sites (which will be designated Category 1 services) will need to act on content that is lawful but still harmful, such as abuse that falls below the threshold of a criminal offence, encouragement of self-harm and mis/disinformation. Category 1 platforms will need to state explicitly in their terms and conditions how they will address these legal harms and Ofcom will hold them to account.
The draft Bill contains reserved powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with Ofcom’s requests for information. These powers would be introduced only if tech companies fail to comply with their new responsibilities, following a review to be carried out at least two years after the new regulatory regime is fully operational.
The legislation will also contain provisions that require companies to report child sexual exploitation and abuse content identified on their services. This aims to ensure that companies provide law enforcement with the high-quality information they need to safeguard victims and investigate offenders.
Freedom of expression
The Bill aims to ensure that people in the UK can express themselves freely online and participate in pluralistic and robust debate.
All companies in scope will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice and might include, for example, having human moderators take decisions in complex cases where context is important.
People using their services will need access to effective routes of appeal where content is removed without good reason, and companies must reinstate content that has been removed unfairly. Users will also be able to complain to Ofcom, and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity.
Category 1 services will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects.
These measures aim to ensure that online companies do not adopt overly restrictive measures or over-remove content in their efforts to meet their new online safety duties. Over-removal might occur, for example, where AI moderation technologies falsely flag innocuous content, such as satire, as harmful.
Democratic content
Category 1 services will have a duty to protect content defined as ‘democratically important’. This will include content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.
Companies will also be forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation. Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to follow them or face enforcement action from Ofcom.
When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is ‘democratically important’.
The DCMS gives the example of a major social media company choosing to prohibit all depictions of deadly or graphic violence. A campaign group could release violent footage to raise awareness about violence against a specific group. Given its importance to democratic debate, the company might choose to keep that content up, subject to warnings, but it would need to be upfront about the policy and ensure it is applied consistently.
Journalistic content
Content on news publishers’ websites is not in scope. This includes both their own articles and user comments on those articles. Articles by recognised news publishers shared on in-scope services will also be exempt, and Category 1 companies will have a new statutory duty to safeguard UK users’ access to journalistic content shared on their platforms.
This means they will have to consider the importance of journalism when undertaking content moderation, provide a fast-track appeals process for journalists’ removed content, and be held to account by Ofcom for the arbitrary removal of journalistic content. Citizen journalists’ content will have the same protections as professional journalists’ content.
Online fraud
Measures to tackle user-generated fraud will be included in the Bill. Online companies will, for the first time, have to take responsibility for tackling fraudulent user-generated content, such as posts on social media, on their platforms. This includes romance scams and fake investment opportunities posted by users in Facebook groups or sent via Snapchat. Romance fraud occurs when a victim is tricked into thinking they are striking up a relationship with someone, often through an online dating website or app, when in fact they are dealing with a fraudster seeking money or personal information.
Fraud via advertising, emails or cloned websites will not be in scope because the Bill focuses on harm committed through user-generated content.
The UK government says that it is working closely with industry, regulators and consumer groups to consider additional legislative and non-legislative solutions. The Home Office will publish a Fraud Action Plan after the 2021 spending review and the DCMS will consult on online advertising, including the role it can play in enabling online fraud, later this year.
Territorial extent
The Online Safety Bill extends to the whole of the UK.