The House of Lords Communications and Digital Committee has published its report on large language models and generative AI.
It says that the UK government’s approach to AI and large language models (LLMs) has become too focused on a narrow view of AI safety. The UK must rebalance towards boosting opportunities while tackling near-term security and societal risks. Otherwise, it will fail to keep pace with competitors, lose international influence and become strategically dependent on overseas tech firms for a critical technology.
The report warns about the “real and growing” risk of regulatory capture, as a multi-billion pound race to dominate the market deepens. Without action to prioritise open competition and transparency, a small number of tech firms may rapidly consolidate control of a critical market and stifle new players, mirroring the challenges seen elsewhere in internet services.
The Committee welcomes the UK government’s work on positioning the UK as an AI leader, but says a more positive vision for LLMs is needed to reap the social and economic benefits, and enable the UK to compete globally. Key measures include more support for AI start-ups, boosting computing infrastructure, improving skills, and exploring options for an ‘in-house’ sovereign UK large language model.
The Committee considered the risks around LLMs and says the apocalyptic concerns about threats to human existence are exaggerated and must not distract policy makers from responding to more immediate issues.
The report identified more immediate near-term security risks, including cyber attacks, child sexual exploitation material, terrorist content and disinformation. The Committee says catastrophic risks are less likely but cannot be ruled out, noting the possibility of a rapid and uncontrollable proliferation of dangerous capabilities and the lack of early warning indicators. The report called for mandatory safety tests for high-risk models and more focus on safety by design.
The Committee calls on the government to support copyright holders, saying the government “cannot sit on its hands” while LLM developers exploit the works of rightsholders. It rebukes tech firms for using data without permission or compensation, and says the government should resolve the copyright dispute “definitively”, including through legislation if necessary. The report calls for a suite of measures: a mechanism for rightsholders to check training data for copyright breaches, investment in new datasets to encourage tech firms to pay for licensed content, and a requirement for tech firms to declare what their web crawlers are being used for.
The Committee has made ten core recommendations. These include measures to boost opportunities, address risks, support effective regulatory oversight (including ensuring open competition and avoiding market dominance by established technology giants), achieve the aims set out in the AI White Paper, introduce new standards, and resolve copyright disputes.