The Science, Innovation and Technology Select Committee has launched an inquiry into social media, misinformation and algorithms. The inquiry follows the anti-immigration demonstrations and riots which took place across the UK earlier this year. Some targeted mosques and hotels housing asylum seekers, driven in part by false claims about the killing of three children in Southport that spread on social media platforms.
Ofcom, the regulator, has said that illegal content and disinformation spread “widely and quickly” online following the attack, and that the riots demonstrated the role that “algorithmic recommendations” can play in driving divisive narratives in a crisis period. It said the response by social media companies to this content had been “uneven”.
The Online Safety Act 2023 tightens the law on disinformation and gives providers new duties to reduce the risk that their services are used for illegal activity, and to take down illegal content.
The Science, Innovation and Technology Committee’s inquiry covers the links between algorithms used by social media and search engines to rank content, generative AI, and the spread of harmful or false content online. The inquiry will examine the effectiveness of current and proposed regulation for these technologies, including the Online Safety Act, and what further measures might be needed. It will investigate the role of these technologies in driving social harms, with a particular focus on their role in the summer 2024 riots.
The Committee is asking the following questions:
- To what extent do the business models of social media companies, search engines and others encourage the spread of harmful content, and contribute to wider social harms?
- How do social media companies and search engines use algorithms to rank content, how does this reflect their business models, and how does it play into the spread of misinformation, disinformation and harmful content?
- What role do generative artificial intelligence and large language models play in the creation and spread of misinformation, disinformation and harmful content?
- What role did social media algorithms play in the riots that took place in the UK in summer 2024?
- How effective is the UK’s regulatory and legislative framework at tackling these issues?
- How effective will the Online Safety Act be in combating harmful social media content?
- What more should be done to combat potentially harmful social media and AI content?
- What role do Ofcom and the National Security Online Information Team play in preventing the spread of harmful and false content online?
- Which bodies should be held accountable for the spread of misinformation, disinformation and harmful content resulting from social media and search engines’ use of algorithms and AI?
The deadline for responses is 18 December 2024.