Artificial intelligence (AI) technologies are creating significant opportunities to improve customer experience and satisfaction in financial services. From automated recommendations and deeper market insights to insurance quotes and chatbots, AI is increasingly a common feature of the sector.
As financial services providers continue to develop and implement AI-enabled products and services, they should consider putting in place protections to ensure that financial consumer rights are maintained. We explore some of these considerations below.
1. Consumers want transparency
Regulators across the world are converging on practices which promote transparency. Informing customers when a decision is being made using AI may be key to this. Financial services providers should consider how best to inform customers at an early stage that AI is being used and how such notifications can be provided in a clear, concise and unambiguous manner.
In the UK, the Financial Conduct Authority (FCA) exercises a number of powers under the Consumer Rights Act 2015 in respect of the fairness of contractual terms and service contracts. This means that providers need to consider how these obligations apply to informing consumers about AI use. Communications with consumers should be prominent, in plain and intelligible language, and brought to the consumer’s attention in a way that would make the average consumer aware of what is being communicated.
Internationally, supervisory and regulatory bodies are advocating a high level of transparency regarding AI use in financial services. For example, the European Commission’s High-Level Expert Group (HLEG) on AI states in its ethics guidelines on trustworthy AI that the distinction between humans and AI should always be clear, while the Federal Trade Commission (FTC) in the US advises businesses to ‘be careful not to mislead consumers about the nature of the interaction’, or risk facing FTC enforcement action.
Where personal data is being processed using AI, financial services providers must also comply with data protection requirements, such as notifying consumers of the presence of automated decision making in regions where the General Data Protection Regulation (GDPR) applies.
2. Consumers want choice
In discussions on potential AI regulation, the extent to which consumers can make informed choices and opt out of AI-enabled products and services has been a prominent theme. Understanding which decisions customers are comfortable leaving to AI and which they are not, and building parameters to reflect this where it is a practical option, will help to build customer confidence in AI-enabled products and services.
Financial services providers may therefore wish to consider how often and when they seek consent from customers and the extent to which they can provide customers with choice over whether AI is used to make decisions which may have a legal or financial impact on a customer’s personal life. This is another area of rule setting that is becoming more consistent internationally. For example, the Financial Stability Board has highlighted the need for providers to consider asking ‘for an active consent’ in many cases where AI is used.
3. Consumers want an explanation
Explaining how automated decisions are made may give consumers more confidence that such decisions are appropriate and that the information they have provided has been used fairly. However, providing explanations in this context is challenging, particularly where it is unclear how an AI decision was made or where unexplainable “black-box” AI is used.
Financial services providers should ensure that explanations given are appropriate to the individuals to whom they are provided. In the UK, the Information Commissioner’s Office suggests in its draft guidance on the AI Auditing Framework that a suitable explanation does not necessarily involve disclosing the algorithms or models used to make a decision; it may be more appropriate to explain to individuals how a decision was made on the basis of the data they provided and what effect the decision may have on them. The FCA has referenced the idea of “sufficient interpretability”, which focuses on explaining the main drivers behind a decision while accepting that not all decisions, whether made by humans or AI, may be explainable in an absolute sense.
In the US, the FTC provides useful guidance on the extent to which decisions should be explained when providing credit: ‘companies are required to disclose to the consumer the principal reasons why they were denied credit. It’s not good enough simply to say “your score was too low” or “you don’t meet our criteria.” You need to be specific (for example, “you’ve been delinquent on your credit obligations” or “you have an insufficient number of credit references”)’.
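By way of illustration only, the sketch below shows one way the main negative drivers of an automated credit decision might be translated into the kind of specific reasons the FTC guidance describes. It assumes a deliberately simple, hypothetical linear scorecard whose per-feature contributions can be read directly; the feature names, weights, threshold and reason wording are all invented for the example and are not drawn from any regulator’s guidance.

```python
# Minimal, hypothetical sketch: deriving specific, human-readable reasons for a
# credit decision from a simple linear scorecard. Not any regulator's method.

REASON_TEXT = {
    "delinquencies_24m": "you have been delinquent on your credit obligations",
    "num_credit_references": "you have an insufficient number of credit references",
    "utilisation_ratio": "your credit utilisation is too high",
}

# Hypothetical scorecard: score = sum(weight * value) over the applicant's features.
WEIGHTS = {
    "delinquencies_24m": -40.0,      # each recorded delinquency lowers the score
    "num_credit_references": 15.0,   # more references raise the score
    "utilisation_ratio": -60.0,      # higher utilisation lowers the score
}
APPROVAL_THRESHOLD = 20.0


def decide_with_reasons(applicant: dict, max_reasons: int = 2):
    """Return (approved, reasons): the decision plus the principal drivers behind it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= APPROVAL_THRESHOLD:
        return True, []
    # Pick the features that pulled the score down the most and translate them
    # into specific, plain-language reasons for the decline.
    negative = sorted(
        (kv for kv in contributions.items() if kv[1] < 0), key=lambda kv: kv[1]
    )[:max_reasons]
    return False, [REASON_TEXT[f] for f, _ in negative]


if __name__ == "__main__":
    applicant = {"delinquencies_24m": 2, "num_credit_references": 1, "utilisation_ratio": 0.9}
    approved, reasons = decide_with_reasons(applicant)
    print("approved" if approved else "declined: " + "; ".join(reasons))
```

A production model would normally rely on an established explainability technique rather than hand-set weights, but the underlying idea of surfacing the principal drivers in plain language is the same.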
Explainable AI can also help financial services providers trace how a decision was made when dealing with consumer complaints or claims further down the line.
4. Consumers value human intervention
A common view on human intervention in AI systems is that humans must always be seen to be ‘ultimately responsible for, and able to overrule, decisions that are taken’: the European Parliament’s Committee on the Internal Market and Consumer Protection states as much in relation to decisions made in the banking sector.
Consumers may feel that decisions made using AI are unsuitable, incorrect or simply do not meet their expectations. In these cases, it should be easy for consumers to query AI decision making or ask for a decision to be retaken (either by the AI system or otherwise). Where there is solely automated processing of personal data, which is likely in most uses of AI in a financial services context, the GDPR gives data subjects the right to ‘obtain human intervention’, to ‘express his or her point of view’ and to ‘contest the decision’.
Some suggested levels of human intervention are:
(a) ‘human-in-the-loop’ – this involves human intervention in every decision cycle of the system;
(b) ‘human-on-the-loop’ – this involves human intervention at the design stage of the AI system and subsequent monitoring of the system’s operation; and
(c) ‘human-in-command’ – this involves a person having oversight of the overall activity of the AI system, determining when the system should be used, and how to use the system in particular situations. This approach also includes establishing different levels of human discretion during use of the system and allowing for the ability to ‘overrule’ AI decision making.
The European Commission’s HLEG on AI sets out in its Trustworthy AI Assessment List the need for financial services providers to consider whether the humans involved in each of these approaches have been given specific training on exercising oversight of an AI system, whether ‘detection and response mechanisms for undesirable adverse effects’ of the AI are needed for the end user, and whether there is a ‘stop button’ or procedure to safely abort an operation when needed.
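As a purely illustrative sketch, the snippet below shows how a ‘human-in-command’ style gate might sit around an automated decision: low-confidence outcomes are escalated to a human reviewer who can confirm or overrule the AI, and a ‘stop’ switch suspends automated decision making entirely. The class, threshold and reviewer behaviour are hypothetical and are not prescribed by the HLEG’s guidance.

```python
# Hypothetical sketch of a 'human-in-command' style gate around an automated decision.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str          # e.g. "approve" or "decline"
    confidence: float     # the model's confidence in the outcome, 0.0-1.0
    decided_by: str       # "ai" or "human"


class HumanInCommandGate:
    def __init__(self, review_threshold: float = 0.8):
        self.review_threshold = review_threshold
        self.stopped = False  # the 'stop button': suspends automated decisions entirely

    def stop(self):
        self.stopped = True

    def decide(self, ai_outcome: str, ai_confidence: float, human_review) -> Decision:
        """Escalate to a human when the system is stopped or confidence is low;
        the human reviewer can confirm or overrule the AI outcome."""
        if self.stopped or ai_confidence < self.review_threshold:
            return Decision(outcome=human_review(ai_outcome, ai_confidence),
                            confidence=ai_confidence, decided_by="human")
        return Decision(outcome=ai_outcome, confidence=ai_confidence, decided_by="ai")


if __name__ == "__main__":
    gate = HumanInCommandGate(review_threshold=0.8)
    # A stand-in for a real review queue: here the human simply overrules declines.
    reviewer = lambda outcome, conf: "approve" if outcome == "decline" else outcome
    print(gate.decide("decline", ai_confidence=0.55, human_review=reviewer))
    print(gate.decide("approve", ai_confidence=0.95, human_review=reviewer))
```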
5. Consumers want access to redress and complaint procedures
Providers should consider whether their current redress procedures are sufficient to deal with complaints or claims relating to AI-enabled decisions. As well as developing adequate redress measures, various regulators have taken the view that it is just as important that consumers are made aware of their options for submitting claims and complaints and that such options are always accessible for consumers.
The European Parliament’s Committee on the Internal Market and Consumer Protection suggests that, in order to remedy possible mistakes in AI decision making, review procedures are needed within business processes and that ‘it should be possible for consumers to seek human review of, and redress for, automated decisions that are final and permanent’.
To ensure that review processes are sufficient, financial services providers may also need to consider how mistakes can be corrected beyond the traditional redress options given to consumers (such as retaking a decision or offering financial compensation), so as to avoid a repeat of such mistakes. For example, where a decision is considered to be discriminatory, having a process in place to check for bias in the training data used by the AI to make that decision may provide some evidence of steps taken to address the underlying concerns. The European Commission’s HLEG on AI suggests that auditability of decision making (such as traceability of the development process, sourcing of training data, and logging of processes, outcomes and positive and negative impacts), as well as allowing third-party audits, may facilitate the implementation of processes to correct issues that give rise to consumer complaints.
While the implementation of AI may give rise to customer complaints or claims, AI may also be useful in assisting with responses to those complaints. The European Banking Federation, for example, has highlighted that natural language processing technologies are helping banks ‘automatically classify large volumes of unstructured text documents and categorize hundreds of thousands of queries into types and ensure they are routed to the right team for resolution’. However, consideration should be given to whether it is appropriate for financial services providers to use AI to assist with handling AI-related complaints or claims.
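The routing idea can be shown with a deliberately simple sketch. A real deployment would use a trained natural language processing classifier; the keyword rules and team names below are hypothetical stand-ins so that the example remains self-contained.

```python
# Hypothetical sketch: routing incoming complaint text to the right team.
# Simple keyword matching stands in for a trained NLP classifier here.

ROUTING_RULES = {
    "payments_team": ["payment", "transfer", "direct debit"],
    "lending_team": ["loan", "credit", "mortgage"],
    "ai_decisions_team": ["automated decision", "algorithm", "ai system"],
}


def route_complaint(text: str, default_team: str = "general_enquiries") -> str:
    """Return the team a complaint should be routed to, based on its content."""
    lowered = text.lower()
    for team, keywords in ROUTING_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return team
    return default_team


if __name__ == "__main__":
    print(route_complaint("My loan application was declined by an automated decision."))
    # -> "lending_team" (the first matching rule wins, so rule ordering matters)
```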
Luke Scanlon is head of fintech propositions at Pinsent Masons and advises some of the world’s leading technology companies, banks and fintech businesses on a range of fintech and legal technology related issues.