Artificial intelligence (AI) is coming at us before we fully understand what it might mean. Established ways of doing things in areas like transport regulation, crime prevention and legal practice are being challenged by new technologies such as driverless cars, crime prediction software and “AI lawyers”.

The possible implications of AI innovations for law and public policy in New Zealand will be teased out in a new, ground-breaking Law Foundation study. The three-year multi-disciplinary project, Artificial Intelligence and Law in New Zealand, supported by a $400,000 Law Foundation grant, is being run out of the University of Otago, and is a collaboration between the Faculty of Law and the departments of Philosophy and Computer Science.

Project team leader Associate Professor Colin Gavaghan of the Faculty of Law says that AI technologies – essentially, technologies that can learn and adapt for themselves – pose fascinating legal, practical and ethical challenges. Tackling them requires an interdisciplinary team: Associate Professor Gavaghan is joined by Associate Professor James Maclaurin from Philosophy, who brings expertise in ethics and the philosophy of science, and Associate Professor Alistair Knott from Computer Science, who brings expertise in artificial intelligence.

A current example is PredPol, predictive-policing software now widely used by police in American cities to forecast where and when crime is most likely to occur. PredPol has been accused of reinforcing bad practices such as racially biased policing. Some US courts are also using predictive software when making judgments about likely reoffending.

“Predictions about dangerousness and risk are important, and it makes sense that they are as accurate as possible,” Associate Professor Gavaghan says. “But there are possible downsides – AI technologies have a veneer of objectivity, because people think machines can’t be biased, but their parameters are set by humans. This could result in biases being overlooked or even reinforced.

“Also, because those parameters are often kept secret for commercial or other reasons, it can be hard to assess the basis for some AI-based decisions. This ‘inscrutability’ might make it harder to challenge those decisions, in the way we might challenge a decision made by a judge or a police officer.”
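To see how a bias can be reinforced rather than corrected, consider a deliberately simplified sketch – hypothetical throughout, since PredPol’s actual model is proprietary and unpublished. Patrols are allocated according to recorded crime, but what gets recorded depends partly on where patrols already are. In the toy simulation below, both districts have identical true crime rates, yet an early imbalance in the records never washes out.

```python
import random

# A deliberately simplified, hypothetical model of a predictive-policing
# feedback loop (not PredPol's actual, proprietary algorithm). Patrols are
# allocated according to *recorded* crime, but recording depends on patrol
# presence -- so an early skew in the data perpetuates itself even though
# the true crime rate is identical everywhere.

random.seed(42)

TRUE_CRIME_RATE = 0.3        # same underlying rate in both districts
DETECTION_PER_PATROL = 0.1   # chance one patrol unit records an incident
TOTAL_PATROLS = 10

# Seed data: district A happens to be over-represented in the records.
recorded = {"district_a": 12, "district_b": 8}

for week in range(20):
    total = sum(recorded.values())
    # Allocate patrols in proportion to each district's recorded history.
    patrols = {d: round(TOTAL_PATROLS * c / total) for d, c in recorded.items()}
    for district in recorded:
        # True incidents occur at the same rate in both districts...
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(50))
        # ...but districts with more patrols record more of them.
        p_detect = min(1.0, DETECTION_PER_PATROL * patrols[district])
        detected = sum(random.random() < p_detect for _ in range(incidents))
        recorded[district] += detected

print(recorded)  # the seed imbalance persists; the loop never corrects it
```

Nothing in the loop revisits the seed data: proportional allocation keeps reproducing the original skew, which is one mechanism by which a bias can be “overlooked or even reinforced”.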

Another example is the debate over how driverless cars should make choices in life-threatening situations. Recently, Mercedes announced that it will program its cars to prioritise car occupants over pedestrians when an accident is imminent.

Associate Professor Gavaghan says this is a tough ethical question.

“Mercedes has made a choice that is reassuring for its drivers and passengers, but are the rest of us OK with it? Human drivers faced with these situations have to make snap decisions, and we tend to cut them some slack as a result. But when programming driverless cars, we have the chance to set the rules calmly and in advance. The question is: what should those rules say?”
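What “setting the rules calmly and in advance” might look like in code can be suggested with a toy sketch – entirely hypothetical, as Mercedes has not published its decision logic. Here a single, human-chosen weighting parameter determines whose safety the car prioritises when every available manoeuvre carries some risk.

```python
from dataclasses import dataclass

# Entirely hypothetical sketch -- Mercedes has not published its decision
# logic. The point: in a driverless car, the trade-off between occupants
# and pedestrians is an explicit rule, written calmly and in advance,
# rather than a human driver's snap judgement.

@dataclass
class Manoeuvre:
    name: str
    occupant_risk: float    # estimated probability of serious harm, 0-1
    pedestrian_risk: float

# A weight above 1.0 encodes an occupant-first policy. This number is a
# human choice -- the kind of parameter regulators might want to scrutinise.
OCCUPANT_WEIGHT = 2.0

def expected_harm(m: Manoeuvre) -> float:
    """Weighted harm score: lower is better under this (assumed) policy."""
    return OCCUPANT_WEIGHT * m.occupant_risk + m.pedestrian_risk

options = [
    Manoeuvre("brake in lane", occupant_risk=0.4, pedestrian_risk=0.3),
    Manoeuvre("swerve to kerb", occupant_risk=0.1, pedestrian_risk=0.6),
]

print(min(options, key=expected_harm).name)  # "swerve to kerb" at weight 2.0
```

The ethical choice lives in one number: set OCCUPANT_WEIGHT to 1.0 and the same code weighs everyone’s safety equally. That such a parameter can be inspected and debated before any accident occurs is precisely what distinguishes it from a human driver’s snap decision.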

Another set of questions flows from the employment implications of AI. At least one American law firm now claims to have hired its first “AI lawyer” to research precedents and make recommendations in a bankruptcy practice.

“Is the replacement of a human lawyer by an AI lawyer more like making the lawyer redundant, or more like replacing one lawyer with another one? Some professions – lawyers, doctors, teachers – also have ethical and pastoral obligations. Are we confident that an AI worker will be able to perform those roles?”

He says the research team will consider the implications of AI technologies under four broad headings: responsibility and culpability; transparency and scrutiny; employment displacement; and “machine morality”.

Associate Professors Gavaghan (Director of the New Zealand Law Foundation Centre for Emerging Technologies), Knott and Maclaurin will be assisted by two post-doctoral researchers. They will examine international literature on AI, consult with international experts and study the experience of other countries, the United States in particular.

The Law Foundation is an independent charitable trust that supports research and education on legal issues. Executive Director Lynda Hagen says the AI study will be funded under the Foundation’s Information Law and Policy Project (ILAPP), a recently established $2 million fund dedicated to developing law and policy in New Zealand around IT, data, information, artificial intelligence and cyber-security.

“The AI study is among the first to be funded under our ILAPP project,” Lynda says. “New technologies are rapidly transforming the way we live and work, and ILAPP will help ensure that New Zealand’s law and policy keeps up with the pace of change.”

The AI study is the fourth project approved under ILAPP. The first is examining how to regulate digital or crypto-currencies such as Bitcoin that use blockchain technology – these are poised to disrupt the finance world and beyond, and regulators like our Reserve Bank are concerned about the implications for financial system stability. The other two projects will cover “smart contracts” and the digitisation of law, and the regulation of new technologies like driverless cars, drones, Uber and Airbnb. Visit the Law Foundation website for more information on these projects.