The Home Office and the Department for Digital, Culture, Media and Sport have jointly published an Online Harms White Paper for consultation. In summary, the proposals are that social media companies and tech firms will be required by law to protect their users, facing tough penalties if they do not comply, and an independent regulator will be introduced.
The consultation seeks views on various aspects of the government’s plans for regulation and tackling online harms, including:
- the online services in scope of the regulatory framework;
- options for appointing an independent regulatory body to implement, oversee and enforce the new regulatory framework. The regulator will be funded by industry in the medium term, and the UK government is exploring options such as an industry levy to put it on a sustainable footing;
- the enforcement powers of an independent regulatory body, e.g. issuing substantial fines, blocking access to sites and potentially imposing liability on individual members of senior management;
- potential redress mechanisms for online users; and
- measures to ensure regulation is targeted and proportionate for industry.
The aim is that a range of harms will be tackled as part of the measures outlined in the White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyberbullying and children gaining access to inappropriate material. The press release includes an illustrative, but not exhaustive, table of the harms covered. Companies will be required to take tougher action to tackle terrorist content and child sexual exploitation and abuse content.
The proposed laws will apply to any company that allows users to share or discover user-generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.
Measures set out in the White Paper include:
- A new statutory ‘duty of care’ to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
- Requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.
- Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it.
- Requiring companies to respond to users’ complaints and to act on them quickly.
- Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
- A new “Safety by Design” framework to help companies incorporate online safety features in new apps and platforms from the start.
- A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing (luring people into relationships by using a fake online persona), grooming and extremism.
Alongside the White Paper, the UK government has also published an updated Digital Charter and the RESIST toolkit, which enables organisations to develop a strategic counter-disinformation capability. Primarily a resource for public service communications teams, the toolkit equips people with the knowledge and skills to identify, assess and respond to disinformation.
The consultation ends on 1 July 2019.