After much fanfare, many additions and two select committee reports, the Online Safety Bill has been introduced to the House of Commons. This is the first step in its passage through the UK parliament to become law. The UK government has made significant changes since it published the draft Bill including:
- Bringing paid-for scam adverts on social media and search engines into scope to combat online fraud.
- Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure their users are aged 18 or over.
- Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.
- Requiring companies to proactively tackle the most harmful illegal content and criminal activity more quickly.
- Criminalising cyberflashing.
It will also place requirements on social media firms to protect journalism and democratic political debate on their platforms. News content will be exempt from any regulation under the Bill. The government has made other key changes, which are set out below.
Criminal liability for senior managers
The Bill gives Ofcom powers to require tech companies to provide it with information and data, including on the role of their algorithms in selecting and displaying content, so it can assess how they are shielding users from harm.
Ofcom will be able to enter companies’ premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of how they are keeping users safe.
The Bill was originally drafted with a power to hold senior managers of large online platforms criminally liable for failing to ensure their company complies with Ofcom’s information requests in an accurate and timely manner. In the draft Bill, this power was deferred and could not be used by Ofcom for at least two years after the Bill became law. The revised Bill reduces this period to two months, strengthening the penalties for wrongdoing from the outset.
Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They will apply to every company in scope of the Online Safety Bill. They are:
- offences for in-scope companies and their employees who suppress, destroy or alter information requested by Ofcom;
- offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;
- offences for employees who fail to attend or provide false information at an interview.
Falling foul of these offences could lead to up to two years’ imprisonment or a fine.
Under the Bill, Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.
Changes to requirements on ‘legal but harmful’ content
Under the draft Bill, ‘Category 1’ companies – the largest online platforms with the widest reach including the most popular social media platforms – must address content harmful to adults that falls below the threshold of a criminal offence.
Category 1 companies will have a duty to carry out risk assessments on the types of legal harms to adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such content and enforce those terms consistently. If companies intend to remove, limit or allow particular types of content, they will have to make this clear.
The agreed categories of legal but harmful content will be set out in secondary legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or left to the whim of internet executives.
The government says that this will remove the threat of social media firms being overzealous and removing legal content that upsets or offends someone, even if it is not prohibited by their terms and conditions.
The move aims to help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.
The DCMS Secretary of State has the power to add more categories of priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.
Proactive technology
Platforms may need to use tools for content moderation, user profiling and behaviour identification to protect their users.
Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies in codes of practice and require companies to use better and more effective tools, if necessary.
Companies will need to demonstrate that they are using the right tools to address harms, that they are transparent about their use, and that any technologies they develop meet the standards of accuracy and effectiveness required by the regulator. However, Ofcom will not be able to recommend that these tools are applied to private messaging or to legal but harmful content.
Reporting child sexual abuse
A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency. It will replace the UK’s existing voluntary reporting regime. Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.
In-scope companies will need to demonstrate existing reporting obligations outside the UK to be exempt from this requirement, avoiding duplication of companies’ efforts.