We are slowly seeing an emerging trend of organisations considering the use of generative technologies across all areas of business. Collectively known as ‘generative AI’, these technologies (such as the popular ChatGPT and DALL-E) are capable of taking a prompt from their user and creating entirely new content, such as blog posts, letters to clients, or internal policies.
In a previous article, we examined several points that organisations should consider, such as the potential for IP infringement and inadvertent PR issues. The article went on to consider several steps organisations can take to mitigate these risks, such as regular testing and ensuring appropriate safeguards are put in place. As will be clear to those who have already interacted with these technologies, while there is certainly value in implementing them within certain processes, such safeguards are a necessary step to ensure that the AI behaves accurately and, in the case of written works, in a way that is not misleading.
The need for these internal processes can be illustrated using a simple riddle. For this example, ChatGPT was asked the following:
‘Mark has three brothers: Peter, Alex, and Simon. Who is the fourth brother?’.
The AI quickly, though inaccurately, responded that there was no fourth brother, even though, as would be obvious to a human, Mark himself is the fourth brother: he is a brother to Peter, Alex, and Simon.
While the error in this example is humorous and without consequence, the stakes change when generative AI is put to more technical or material uses. The creation of news articles informing the public of political developments, for example, would clearly require that the information be accurate and reliable. The same could be said for a letter regarding a failure to adhere to contractual terms or service levels. A number of outcomes may materially mislead parties (whether mistakenly or otherwise) to their detriment, which, in turn, may lead to concerns of dishonesty, fraud, and misinformation.
Is it therefore necessary to go beyond internal processes and explicitly indicate that certain works or products have been created by AI? Some believe so.
In December 2022, the Cyberspace Administration of China (“CAC”), the governmental body responsible for oversight of the internet in China, issued regulations prohibiting the creation and distribution of AI-generated content without clear labelling, such as watermarks. In a news post accompanying the regulations, the CAC highlighted that, in response to the increasing use of ‘digital synthesis technology’, users must be protected from malicious actors who seek to disseminate information that counterfeits the works of others or intentionally defrauds and misleads them. This is by no means a novel consideration: similar markings have already been implemented across several social media platforms, indicating that certain content is likely to have been produced by a bot or to contain misinformation.
Over in the US, in late 2022 the White House issued a blueprint intended to form the foundation for an AI Bill of Rights. Unlike the regulations in China, these are currently non-binding building blocks that leave state legislators to determine how best to regulate the matter. The blueprint does not provide specific steps that can be taken to ensure protections for citizens are put in place. In this absence, many organisations have considered the question of identifying marks for AI-generated works themselves. OpenAI, the developer of ChatGPT, have previously indicated that they are already working on a way to “statistically watermark the outputs” that are created. It is hoped that the use of these markings will make it much more difficult to pass off the work of AI as human. In this sense, the developers are seeking to prevent parties from taking credit for work they may have contributed to, but not completed themselves. This contrasts with the approach in China, where the measures are designed to protect users from the content itself, rather than to protect the effort and work taken to create it. Since this announcement, OpenAI have also released a tool that allows users to submit a piece of text and determine whether it was likely created by a human or by AI.
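OpenAI has not published the details of how its statistical watermarking would work. One approach discussed in the research community (used here purely for illustration, and not to be taken as OpenAI's method) is to bias generation towards a pseudorandom ‘green list’ of tokens derived from each preceding token, and then, at detection time, test whether a suspiciously high share of tokens falls on those lists. The sketch below is a minimal illustration under that assumption; the hashing scheme, toy vocabulary, and 50% baseline are hypothetical.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Derive a pseudorandom 'green' subset of the vocabulary, seeded by the
    previous token. A watermarking generator would softly prefer these tokens."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection side: measure how often each token falls in the green list
    implied by its predecessor. Unwatermarked text should hover near the
    baseline (0.5 here); watermarked text should sit noticeably above it."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab)
    )
    return hits / (len(tokens) - 1)


# Hypothetical usage: score a passage against the 0.5 baseline. A real
# detector would apply a proper statistical test over a much longer text.
vocab = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat"]
print(green_fraction("the cat sat on the mat".split(), vocab))
```

The practical consequence of such a design is that attribution becomes a statistical judgement: short passages carry too little signal to flag reliably, which is one reason detection tools of this kind are presented as probabilistic rather than definitive.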
Meanwhile, in the EU, we are continuing to see the development of the proposed AI Regulation (“AI Act”). The AI Act will serve as an overarching regulatory framework for the bloc, imposing several protective obligations on vendors and users of AI. The EU’s approach classifies these technologies as ‘general-purpose AI’, as they can be used for several different purposes rather than for one specific task. As the AI Act is still under review, it is not yet clear whether the EU will take a strict approach, as in China, or a more guided approach, as in the US, to the indication of AI-generated content. Presently, however, there is a push by EU regulators to require companies to be more transparent about how their AI models work, and there are several existing regulations that actively discourage the dissemination of misinformation. It would therefore stand to reason that we may see a more prescriptive requirement to alert users to instances where they are interacting with AI-generated content. In instances where the content may unduly influence more vulnerable members of society, developers and organisations may even find the use of these technologies prohibited under the AI Act’s current provisions on prohibited practices.
What remains a consistent theme across jurisdictions is a growing push towards transparency, whether mandatory or otherwise, on whether the content being interacted with is generated by a human or by AI. Unless there is a specific reason not to alert viewers, readers, or users to the nature of the content, doing so appears to be the most prudent step towards meeting any ethical or regulatory obligation of transparency. Furthermore, the use of these indicators may allow those interacting with AI-generated work to do so knowing that certain information may be inaccurate or require further investigation. Therefore, while labelling is a well-principled step in the dissemination of AI-generated content, it remains to be seen whether industry will adopt it unless obligated to do so.
Coran Darling is a Trainee Solicitor at DLA Piper LLP with experience in data privacy, artificial intelligence, robotics, and intellectual property, having joined the firm from the technology industry. He is an active member of DLA Piper’s Working Group on AI and a committee member of the Society for Computers and Law specialist group on artificial intelligence, with a particular interest in algorithmic transparency.