In this article, I will briefly explain the process of transforming legislation and other legal sources into algorithms expressed as programming code, with the aim of automating the application of the law. The process implies a shift from natural language to formal programming language; however, it is often hard to express traditional legislation in terms of programming instructions. To the extent that the legislator wishes to prepare the ground for algorithmic law and automation, it may thus be argued that the style of law-making should be more ‘automation-friendly’.
An algorithm is a stepwise procedure that performs logical and arithmetical operations in a pre-defined order. Algorithms may be carried out by a computer and may be wholly or partly automatic. Of particular interest from a legal perspective is the fact that these algorithmic procedures can represent formal decision-making in individual cases, ie the application of algorithmic procedures can establish legal rights and duties for citizens and businesses. There are many fields of application. In recent years, the use of algorithms on the Internet – for example, to determine search results and the prices offered in online commerce – has attracted much attention. In government administration, by contrast, algorithmic law has a history spanning more than 50 years and is as old as computer technology itself.
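To illustrate the idea of a stepwise procedure, consider the following minimal sketch. The scheme, the rate and the income ceiling are hypothetical and chosen only to show how logical and arithmetical operations can be chained in a pre-defined order; they do not correspond to any actual piece of legislation.

# Hypothetical illustration only: a stepwise calculation of a fictitious benefit.
# Step 1 applies a logical condition; step 2 applies an arithmetical rule.

def monthly_benefit(number_of_children: int, annual_income: float) -> float:
    # Step 1 (logical): no entitlement without children or above a hypothetical income ceiling
    if number_of_children == 0 or annual_income > 500_000:
        return 0.0
    # Step 2 (arithmetical): a hypothetical fixed rate per child
    return 1_500.0 * number_of_children

print(monthly_benefit(2, 420_000))  # -> 3000.0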
When algorithms are written in a programming language, the computer will read and perform the operations contained in the instructions programmed into the machine. Algorithmic law requires that a valid interpretation of the law is established as part of the system design, ie long before actual cases are processed by the system. Algorithmic law ultimately involves a faceless processing of cases, untouched by human hands. Thus, this type of decision-making runs the risk of being a rigid application of the law, blind to societal development – at least until the program is changed – and changes tend to happen infrequently, as such systems are often very complex and both hard and expensive to alter. For these and other reasons, algorithmic law raises a policy issue: some people would reject automated decision-making because it may deprive them of the chance to argue their case and to insist that individual considerations be taken into account.
Notwithstanding possible objections, law expressed by means of algorithms, and in particular automated decision-making, is a policy that many governments prefer simply because it is often an effective and efficient way of implementing the law. In some areas of government administration, it would not have been feasible to manage without algorithms and automation. Many of the tax and welfare schemes developed from the 1960s onwards would hardly have been possible to introduce without computers governed by algorithmic statements representing the law. However, it is important to discuss the limits of automated decision-making and, in particular, to ask what should characterise decisions if machines are allowed to play an important decision-making role – or, to put it differently, when should we avoid using machines to automatically assess legal problems (and perhaps let them only give advice to human decision-makers instead)?
Computers never make the final decision: people always have the last word. Someone who has interpreted the legal sources and arrived at conclusions regarding what the rules say must decide which rules should be implemented in program code and how. They may not necessarily have a full overview of the possible outcomes of the programs they develop, but they have at least accepted that full control is beyond their capability. (Legislators also do not know every consequence of the statutes they enact, but that does not make them less accountable for their decisions.) Thus, it is of crucial importance that attention be focused on the procedures and the people taking part in the activities that make automated decision-making possible.
People developing systems for automatic processing of individual cases transform the authentic texts of the legal sources into rules expressed in computer programs. In so doing, they must try to understand the law as a set of instructions executing logical and arithmetical operations on certain well-defined data. Rules must be expressed, for example, by means of logical and comparison operators (IF, OR, ELSE, NOT, =, <, > etc.) and arithmetical operators (+, -, /, × etc.). The more ‘fixed’ the facts of the individual cases processed by these operators are, the higher the degree of automation possible. Thus, information about pre-established facts, or facts established for several purposes, is preferable to facts established on a case-by-case basis. For instance, the legislator may define ‘partner’ as ‘a spouse, civil partner or one of a couple whether of the same sex or opposite sex who although not married to each other are living together and treat each other as spouses’ (this example is taken from the Well-being of Future Generations (Wales) Act 2015). To apply this rule, we would have to check, case by case, that the conditions are met (that the individuals actually live together and treat each other as spouses). The possibilities for automation in this case are very limited. By way of contrast, a rule based on formally established and registered, ‘fixed’ facts – about who the parents of a common child are, or information in the national register about who shares the same accommodation, and other similar conditions – would prepare the ground for automation. Such a line of action would make it possible to express legal rules as algorithms and to have automated, legally correct processing of individual cases about ‘partners’. In several branches of government administration in Norway, for example, public officials are actively amending the law by replacing discretionary and blurred facts with fixed facts that can be automatically accessed from machine-readable sources.
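As a rough, hypothetical sketch of the contrast just described, the function below decides the ‘partner’ question solely on the basis of fixed, registered facts. The data fields (registered marriage or civil partnership, registered shared address, registered common child) and the decision logic are assumptions made for illustration; they are not drawn from the Welsh Act or from any actual register.

# Hypothetical sketch: deciding 'partner' status from fixed, registered facts only.
# Field names and logic are illustrative assumptions, not an actual legal rule.

from dataclasses import dataclass

@dataclass
class RegisteredFacts:
    registered_marriage_or_civil_partnership: bool
    registered_shared_address: bool
    registered_common_child: bool

def is_partner(facts: RegisteredFacts) -> bool:
    # Fixed fact 1: a formally registered relationship settles the question directly.
    if facts.registered_marriage_or_civil_partnership:
        return True
    # Fixed facts 2 and 3: a registered shared address combined with a registered
    # common child is used here as a machine-readable proxy for 'living together
    # and treating each other as spouses'.
    return facts.registered_shared_address and facts.registered_common_child

print(is_partner(RegisteredFacts(False, True, True)))   # -> True
print(is_partner(RegisteredFacts(False, True, False)))  # -> False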
The transformation described above, from authentic legal texts to rules expressed as statements in program code, should be seen as a legal decision-making process consisting of at least nine steps (see chart below). How should we judge this process and the procedural requirements that should be linked to it, both in terms of the quality of the legal sources on which the programs are based and the legal effect of choices made when the program was designed? If the results of the process formally and actually represent conclusions about legal issues in individual cases, rather than having a merely advisory function, the transformation process should be seen as particularly important. Moreover, the higher the legal uncertainty in interpreting the relevant legal sources, the greater the significance of the transformation process. Legislation containing vague or undefined concepts, having an uncertain logical structure, not addressing all legal problems within the domain, and so on, gives plenty of leeway for interpretation and thus potentially ample opportunity for the people in the transformation process to act as if they were legislators.
Transforming law into algorithms
The flowchart shows a standard model of the decision-making process, from examining natural language legal texts to a final decision about putting the computer system containing a formal representation of the law into operation. In the first phase of the development process, legal experts gather all relevant legal sources, interpret them and establish valid legal rules. All interpretation problems, regarding both the individual rules and the interrelationships between them, must be identified and solved. These results must then be expressed as a legal specification which, in the next phase, will function as instructions to computer professionals whose task is to create a formal representation in the form of the required programs, tables, databases and so on.
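To give a concrete, purely hypothetical impression of what such a legal specification might contain, the fragment below expresses a single rule as structured data: its legal basis, its conditions over registered facts, and its conclusion. The format, the field names and the content are assumptions made for illustration, not an established specification method.

# Hypothetical sketch of one entry in a legal specification: a rule expressed as
# structured data that computer professionals can turn into a formal representation.
# The format, field names and content are illustrative assumptions only.

partner_rule_specification = {
    "rule_id": "PARTNER-01",
    "legal_basis": "statutory definition of 'partner' (illustrative reference)",
    "conditions": [
        {"fact": "registered_marriage_or_civil_partnership", "operator": "=", "value": True},
    ],
    "alternative_conditions": [
        {"fact": "registered_shared_address", "operator": "=", "value": True},
        {"fact": "registered_common_child", "operator": "=", "value": True},
    ],
    "conclusion": "the two persons are 'partners' for the purposes of the scheme",
    "interpretation_note": "cohabitation is evidenced by fixed, registered facts only",
}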
Formally expressed rules are designed to be interpreted and executed by machines. The semantics, syntax and sheer volume of the code make it impossible for legal experts to check whether the rules embedded in the system are actually in accordance with the specifications. Thus, the specification method and the techniques of dialogue between jurists and computer professionals are important. Not least, testing is crucial to warrant the conclusion that the system is in conformity with the legal specifications and is therefore a proper representation of the legal sources that government may decide to put into use. Testing is carried out by means of a variety of test cases that are both handled by the system and processed manually. Because such systems are frequently very large and complex, with myriad possible pathways through them, a considerable degree of uncertainty regarding legal correctness will often remain.
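A minimal sketch of this kind of conformity testing is given below, reusing the hypothetical ‘partner’ rule from earlier. Each test case pairs machine-readable input facts with an expected outcome established by manual, human processing of the same case; the test fails wherever the automated result diverges from the manual one. The rule and the cases are illustrative assumptions only.

# Hypothetical sketch of conformity testing: results produced by the programmed rule
# are compared with outcomes established by manual processing of the same cases.

def is_partner(registered_partnership: bool, shared_address: bool, common_child: bool) -> bool:
    # The same hypothetical 'partner' rule as sketched above.
    return registered_partnership or (shared_address and common_child)

manually_processed_cases = [
    # (registered partnership, shared address, common child, outcome decided manually)
    (True,  False, False, True),
    (False, True,  True,  True),
    (False, True,  False, False),
    (False, False, False, False),
]

for case in manually_processed_cases:
    *facts, expected = case
    automated_result = is_partner(*facts)
    assert automated_result == expected, f"Divergence from manual outcome in case {case}"
print("All test cases conform to the manually processed outcomes.")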
A lack of consideration on the part of the legislator for the special requirements of algorithmic law and automated decision-making as described above, combined with a government policy to digitise and automate, may be seen as creating a divide between legal and technological possibilities. Thus, project groups set to transform the law may be tempted to choose dubious interpretations in order to satisfy the political demand for automation. ‘Living together and treating each other as spouses’ in the example above could, for instance, be transformed into conditions regarding registered information about accommodation, age, children and so on, together with what are believed to be safe assumptions about people’s cohabitation. Many would agree that, to the extent that we need to adjust to such needs of algorithms and automation, the adjustments should be made in the political rather than the bureaucratic domain. In this author’s view, the preferred strategy should be automation-friendly or automation-cautious law-making, in which trade-offs are made between the requirements of a more efficient government administration based on algorithmic law and those of an administration based on human assessment of individual cases.
Dag Wiese Schartum is professor of Electronic Government at the Norwegian Research Center for Computers and Law, University of Oslo, Norway. Last article in English: Making privacy by design operative (http://ijlit.oxfordjournals.org/).