Digital recordings cannot be trusted. Artificial intelligence can be used to substitute different words and synchronised lip movements seamlessly. Professor Barry O’Sullivan was able to demonstrate this before our very eyes in his Overview of Artificial Intelligence in Dublin last Tuesday.
AI concepts are easy, at least as Barry explains them; it’s the delivery that’s complex.
The difference between traditional computer programming and machine learning is this (a short code sketch follows the list):
- traditionally, we load a software application and data into a computer and run the data through the application to produce a result (an output, e.g. a profit/loss figure);
- artificial intelligence relies on feeding the data and the desired outputs into one or more computers or computing networks that are designed to write the programme that processes the data to produce the output (e.g. feed in data on crimes/criminals together with whether each person re-offended, with the object of producing a programme that will predict whether a given person will re-offend). The data is used to ‘train’ the computer, and the resulting programme is the artificial intelligence.
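To make the contrast concrete, here is a minimal sketch in Python using scikit-learn; the features, labels and the hand-written rule are all made up for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: a human writes the rules, and data is run through them.
def hand_written_rule(prior_convictions: int, age: int) -> bool:
    return prior_convictions > 2 and age < 30  # logic chosen by the programmer

# Machine learning: we supply data plus known outcomes, and the training
# algorithm derives the "programme" (the model) that links one to the other.
X = [[0, 45], [5, 22], [1, 30], [4, 19]]  # features: prior convictions, age (made up)
y = [0, 1, 0, 1]                          # known outcomes: did the person re-offend?

model = DecisionTreeClassifier().fit(X, y)  # training writes the rules for us
print(model.predict([[3, 25]]))             # apply the learned rules to a new case
```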
Use-cases for AI can be defined in terms of the following (a short code sketch follows the list):
- Clustering – a machine learning technique that groups data points. Given a set of data points, a clustering algorithm assigns each one to a group that is not predefined – the discovery of patterns.
- Classification – identifying to which of a set of categories a new observation belongs, on the basis of a training set of data containing observations whose category membership is already known.
- Prediction – models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions (like propensity to re-offend, or creditworthiness).
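A minimal sketch of the three use-cases, assuming scikit-learn and synthetic data (all numbers here are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # synthetic, unlabelled data points

# Clustering: group the points into categories that were not predefined.
groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# Classification: assign a new observation to a known category, learning
# from a training set whose category memberships are already known.
known_labels = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for known memberships
clf = LogisticRegression().fit(X, known_labels)
print(clf.predict([[0.5, -0.1]]))

# Prediction: capture relationships among factors so the risk or potential
# of a new set of conditions (e.g. creditworthiness) can be assessed.
risk = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=100)
reg = LinearRegression().fit(X, risk)
print(reg.predict([[0.5, -0.1]]))
```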
@BarryOsullivan discussing #ArtificialIntelligence, bias in AI systems and the concerns around digital identities. Fantastic event and insights organised by @computersandlaw #Dublin @LexTechIreland @LemanSolicitors
— Karl Manweiler (@KarlManweiler) September 17, 2019
Shortcomings
Apparent feats of artificial intelligence are usually over-hyped and ignore the vast cost in electricity compared to the human brain ($50m of electricity to beat a human at Go, against the 7 watts consumed by the human player’s brain).
AI is also very “brittle” and unable to cope with any scenario on which it hasn’t been ‘trained’: it is heavily dependent on the quantity, quality and availability of data.
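A toy illustration of that brittleness, assuming scikit-learn (the data and model choice are mine, not Barry’s):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Train on a narrow slice of the world: y = x**2 for x between 0 and 1,
# where a straight line happens to fit tolerably well.
x_train = np.linspace(0, 1, 50).reshape(-1, 1)
model = LinearRegression().fit(x_train, x_train.ravel() ** 2)

# Inside the training distribution the model looks competent...
print(model.predict([[0.5]]))   # roughly 0.34 against a true value of 0.25

# ...but it has no understanding of the underlying rule, so it fails
# badly in a scenario it was never trained on.
print(model.predict([[10.0]]))  # about 9.8 against a true value of 100
```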
Computers, and artificial intelligence itself, have no understanding of the world: AI cannot answer causal or counterfactual questions.
No artificial intelligence is 100% accurate, which raises two questions: how inaccurate is it, and what are the consequences of that inaccuracy?
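One way to make those questions concrete is to count the two error types separately, since their consequences differ; a minimal sketch with made-up numbers:

```python
# Made-up outputs from a hypothetical re-offending model (illustrative only).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

false_positives = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
false_negatives = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# A single accuracy figure hides which way the model errs, and the two error
# types can carry very different consequences (wrongly detaining someone
# versus wrongly releasing them, for example).
print(f"accuracy={accuracy:.0%}, false positives={false_positives}, "
      f"false negatives={false_negatives}")
```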
Even more problematic is the lack of (what I call) explainability: “if a neural network wants to turn right, no one can explain why.” Barry agreed that this makes the use of AI acceptable in cases where, say, two reinsurers use it to assess and set off their respective liability for claims more efficiently, on the basis that who pays more or less will even out over time; but not in situations where a false negative or positive means a person loses their life, their freedom, compensation that is actually due to them, or some other fundamental right.
Bias is also a huge problem. We are often unclear about what we mean by bias – there are many different types. Bias tends to be inherent in the data sets used to train AI, and it is considered mathematically impossible to remove both selection bias (accidentally working with a specific subset of a population instead of the whole, making the sample unrepresentative) and prediction bias (false negatives/positives). You might be tempted to correct prediction bias by adding a calibration layer that adjusts the mean prediction by a certain percentage, but that only fixes the symptom, not the cause, and makes the system dependent on the prediction bias and the calibration layer staying aligned over time.
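A minimal sketch of that calibration-layer fix, with made-up numbers, showing why it only patches the symptom:

```python
import numpy as np

# Hypothetical raw model outputs and the observed outcomes (made up).
predictions = np.array([0.9, 0.8, 0.7, 0.9])   # mean prediction = 0.825
outcomes    = np.array([1, 0, 1, 0])           # observed rate   = 0.5

# Prediction bias: the gap between the mean prediction and the observed rate.
bias = predictions.mean() - outcomes.mean()    # systematic over-prediction

# A "calibration layer" simply subtracts the measured gap after the fact.
calibrated = predictions - bias
print(calibrated.mean())                       # now matches the observed 0.5

# The underlying model is unchanged: if the data or the model drifts, the
# stored offset goes stale and the correction silently becomes wrong.
```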
Lawyers need to be engaged in AI development/deployment
Barry is very concerned that, while certain policy bodies are led by lawyers, a lot of AI is actually being developed and deployed without the involvement of any legal expertise. The various shortcomings in AI explained above mean this has to change if we are to develop and use AI responsibly.
Legal issues include:
1. Who is liable for the consequences of inaccuracy in AI?
2. Who owns the intellectual property rights in anything created by the AI itself?
3. AI is not appropriate in situations where the consequences of false positives/negatives are fatal or result in the denial of fundamental rights, compensation and so on.
4. Weaponising AI:
   - image recognition systems tend to be trained on Western objects and faces, so tend to be discriminatory;
   - China has mini-robots that children play with which are actually scanning their faces, irises and so on;
   - it is possible to hack AI (a toy sketch follows this list) by, for instance:
     - doctoring road signs (e.g. so a self-driving car reads a Stop sign as a speed-limit sign);
     - altering people’s appearance, to fool facial recognition into recognising them as someone else;
     - doctoring digital images;
     - substituting different words and synchronised lip movements seamlessly into digital moving images.
5. The European Commission is (“suddenly”) considering regulation along the lines of the “Ethics Guidelines for Trustworthy AI” which contain 7 “requirements”: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability. If explainability were mandated as part of “transparency”, for example, then no AI could meet the requirement. Instead, Barry recommends a regulatory requirement for certification of AI so that the shortcomings of each AI are known and appropriate decisions can then be made about whether and, if so, how it may be deployed.
6. AI can be used for good: Global Pulse is a UN initiative to discover and ‘mainstream’ applications of big data and AI for development and humanitarian action (UNGlobalPulse.org).
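On the hacking point above: a toy sketch of a gradient-sign style attack, assuming scikit-learn and a synthetic two-class problem (the “Stop”/“speed limit” framing is illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Train a toy classifier on two synthetic clusters ("Stop" vs "speed limit").
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)

# Take an input the model classifies correctly and confidently...
x = np.array([[-0.5, -0.5]])
print(clf.predict(x))                # class 0 ("Stop")

# ...and nudge each feature slightly in the direction that most increases
# the model's error: the idea behind gradient-sign adversarial attacks.
w = clf.coef_[0]
x_adv = x + 0.75 * np.sign(w)        # a small, targeted perturbation
print(clf.predict(x_adv))            # now (mis)read as class 1 ("speed limit")
```

The same idea scales to image classifiers with thousands of pixel dimensions, which is how physically doctored signs can fool a vision system.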
Recommended reading:
“Rebooting AI: Building Artificial Intelligence We Can Trust”, Marcus, G. and Davis, E., Random House 2019