Ofcom has published its first-edition codes of practice and guidance on tackling illegal harms, such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide, under the UK’s Online Safety Act.
The Act places new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites. It requires Ofcom to produce codes of practice and industry guidance to help firms to comply, following a period of public consultation.
Ofcom says that it has carefully considered the consultation responses on the draft codes and guidance, and has strengthened some areas of the codes as a result.
Every site and app in scope of the new laws has until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.
Subject to the codes completing the Parliamentary process, from 17 March 2025 sites and apps will need to start implementing safety measures to mitigate those risks; the codes set out the measures they can take. Some of these measures apply to all sites and apps, and others only to larger or riskier platforms. The key changes that sites and apps need to make are:
- Senior accountability for safety. Each provider should name a senior person accountable to their most senior governance body for complying with their illegal content, reporting and complaints duties.
- Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can quickly remove illegal material, such as content encouraging suicide, when they become aware of it. Reporting and complaints functions must be easier to find and use, with appropriate action taken in response. Relevant providers will also need to improve the testing of their algorithms to make illegal content harder to disseminate.
- Protecting children from sexual abuse and exploitation online. Ofcom’s final measures are explicitly designed to tackle pathways to online grooming. By default, on platforms where users connect with each other, children’s profiles, locations, friends and connections should not be visible to other users, and non-connected accounts should not be able to send them direct messages. Children should also receive information to help them make informed decisions about the risks of sharing personal information, and they should not appear in lists of people users might wish to add to their network. The codes also expect high-risk providers to use automated tools, namely hash-matching and URL detection, to detect child sexual abuse material (CSAM); a simplified illustration of hash-matching appears after this list. These tools allow platforms to identify large volumes of illegal content more quickly, and are critical in disrupting offenders and preventing the spread of this seriously harmful content. This expectation extends to smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
- Protecting women and girls. Women and girls are disproportionately affected by online harms. Under Ofcom’s measures, users will be able to block and mute others who are harassing or stalking them. Sites and apps must also take down non-consensual intimate images (or “revenge porn”) when they become aware of them. Following feedback, Ofcom has also provided specific guidance on how providers can identify and remove posts by organised criminals who are coercing women into prostitution against their will. It has also strengthened its guidance to make it easier for platforms to identify illegal intimate image abuse and cyberflashing.
- Identifying fraud. Sites and apps are expected to establish a dedicated reporting channel for organisations with fraud expertise, allowing them to flag known scams to platforms in real time so that action can be taken. Ofcom has expanded the list of trusted flaggers.
- Removal of terrorist accounts. It is very likely that posts generated, shared, or uploaded via accounts operated on behalf of terrorist organisations proscribed by the UK government will amount to an offence. Ofcom expects sites and apps to remove users and accounts that fall into this category to combat the spread of terrorist content.
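To illustrate the kind of hash-matching referred to above, the following is a minimal sketch in Python. It assumes a hypothetical KNOWN_HASHES set of digests for previously identified material; real deployments generally rely on perceptual hashing (for example Microsoft’s PhotoDNA) supplied through industry bodies, which can match re-encoded or resized copies, rather than the exact cryptographic hashes shown here.

```python
import hashlib

# Hypothetical set of digests of previously identified illegal images,
# assumed to be sourced from an industry hash list. The value below is a
# placeholder for illustration only.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest appears in the hash list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

def handle_upload(file_bytes: bytes) -> str:
    """Block uploads that match the hash list; accept everything else."""
    if matches_known_content(file_bytes):
        return "blocked_and_reported"
    return "accepted"
```

Because every upload is reduced to a fixed-length digest and checked against a pre-built list, matching scales to very large volumes of content, which is why the codes treat it as a core tool for high-risk services.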
Enforcement powers
Ofcom says that it will offer support to providers to help them to comply with these new duties. However, it also warns that it is “gearing up to take early enforcement action against any platforms that ultimately fall short.” Under the Act, Ofcom has the power to fine companies up to £18m or 10% of their qualifying worldwide revenue, whichever is greater, and in very serious cases it can apply for a court order to block a site in the UK.
Future developments
Ofcom plans a further consultation on additional codes measures in Spring 2025. This will include proposals in the following areas:
- blocking the accounts of those found to have shared CSAM;
- using AI to tackle illegal harms, including CSAM;
- use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content; and
- crisis response protocols for emergency events (such as last summer’s riots).
As well as this, Ofcom is planning the following:
- January 2025: final age assurance guidance for publishers of pornographic material, and children’s access assessments;
- February 2025: draft guidance on protecting women and girls; and
- April 2025: additional protections for children from harmful content promoting, among other things, suicide, self-harm, eating disorders and cyberbullying.