
Artificial Intelligence and Legal Responsibility

 

Wed, 03/11/2020 - 12:00


The use of Artificial Intelligence (AI) is bound to give rise to difficult questions of law. In particular, this includes questions about the allocation of legal responsibility or liability for wrongs committed in the course of utilising AI. These are worth exploring in view of the sharp increase in research and investment in AI-related technologies and the plan to integrate AI into various aspects of society. Because AI is defined as an artificial personality capable of thought and decision-making, difficult questions arise in relation to its legal treatment, especially concerning matters of liability. These questions transcend multiple areas within the legal discipline, such as contracts, torts and crime, as well as intellectual property.

Law as regulation

Law is often described as a regulatory ‘tool’ capable of engaging in social engineering. It interacts with society, with people and technology. Society shapes the law and law shapes society: it is always a two-way street. Traditionally, the regulatory effect of the law was perceived as originating from the narrow confines of legislation (Morgan & Yeung, 2007), such as penal legislation that serves to punish acts that threaten, harm, or otherwise endanger life or property. This traditional view has been challenged by regulatory scholarship that foresaw the closing of the public-private divide (Cane, 2002; Baldwin, Cave & Lodge, 2012). The assumption that the state has the primary standing in “articulating the collective goals of the community” breaks down in light of the increased participation of non-state actors in shaping behaviour through private litigation (Morgan & Yeung, 2007). Irrespective of the erosion of the public-private divide, what makes the law an efficient regulatory tool is its ability to hold persons or legal entities that flout the norms or standards regarded as acceptable in society ‘responsible’ or ‘liable’ for their actions or omissions. This is visible in every area of specialisation within the legal discipline.

Legal responsibility

More than four decades ago, a judge in New York said that “[c]omputers can only issue mandatory instructions – they are not programmed to exercise discretion” (Pompeii Estate Inc v Consolidated Edison Co, 1977). However, this statement has long been proven wrong. AI is made possible by software that mimics the decision-making process of the human mind (Blodgett, 1987). Yet, unlike humans, AI does not take a physical form and, therefore, is not within reach of legal sanction or punishment in the traditional sense. How ought the law to deal with situations where AI-led processes give rise to civil or criminal wrongs? Who is legally responsible or liable for such wrongs?

Take Self-Driving Vehicles (SDVs), for example. The proposed use of SDVs is one of the many initiatives under the Singapore Government’s Smart Nation Initiative. Although there is a common perception that computers are not prone to error, there are instances when things can go wrong, as they did in October 2016, when an SDV collided with a lorry in the One North area. In such a situation, the law must be able to impose liability on the party legally responsible for the wrong that was committed. While this incident was a minor one with no casualties, when AI is integrated into high-risk activities, the question of legal responsibility becomes a vital matter for consideration. An example of such a high-risk activity is the use of AI for air traffic control; this is set to become a reality, as universities have partnered with government agencies to research the use of AI in this field. However, fusing traditional notions of legal responsibility with AI-driven activities raises difficult questions.

Challenges posed by AI

With most criminal offences (e.g. reckless driving) and torts (e.g. negligence), the wrongdoer’s state of mind is a determining factor for the courts in imputing legal responsibility. Yet when AI is involved, how does one determine its ‘state of mind’? In cases where the state of mind becomes relevant, judges have long employed the test of the ‘reasonable man’ in determining whether the defendant concerned had acted in an objectively reasonable way. Yet, doctrinally, it may not be possible to apply the test of the reasonable man to AI. Even if a criminal or negligent state of mind can be established, how can an artificial construct devoid of physical form or feeling be made legally responsible?

Is it, rather, the case that those who built the AI are responsible for its wrongs? For instance, can it be said that manufacturers of autonomous vehicles are responsible for the wrong decisions made by the software that employs AI? Yet, given that AI, by definition, evolves and arguably has a mind of its own, to what extent can its actions be attributed to its makers? In this regard, and if we are to consider AI and its creator as two distinct entities, theories of accessorial liability could have a role to play in allocating liability to the latter for the former’s acts or omissions. However, there are limitations and strict constraints within which accessorial liability is imposed, ‘foreseeability’ and ‘causation’ being determining factors (Honoré, 1997; Cooper, 2017). To what extent can it be said that the creator of a technology embedding AI had caused, if not foreseen, the wrongs committed by the AI when its intelligence evolves and is no longer bound by the algorithms of its creator? Perhaps in such a case, the lawyer defending the AI’s creator may plead novus actus interveniens: the AI’s own ‘mind’ could have broken the chain of causation, such that the creator’s contribution in building the AI is no longer the effective and dominant cause of the wrong, and it is the AI itself that should be clothed with responsibility.

From another perspective, is the device or machine possessing the AI to be treated as an employee of its creator? This, perhaps, might be an attractive argument: vicarious liability is not imposed on employers for wrongs committed by their employees on the basis that the former somehow contributed to the latter’s wrong, but rather because of the legal relationship between the two parties (Horsey & Rackley, 2015). Yet most employment statutes define ‘employee’ to mean a ‘person’, i.e. in general terms a human. As such, the avenue of vicarious liability, premised on the superficial belief that AI must be the employee of its creator, breaks down under existing notions of the law.

Thus, theoretically, there are challenges in approaching the question of legal responsibility in respect of wrongs committed by AI. Grappling with this question is important: unless liability can be imputed to some party, the actions or omissions of AI will remain beyond legal sanction. While SDVs and AI-based air traffic control provide instructive examples of the liability issues AI can raise, there are many other areas in which AI may be employed in the future. It is therefore important to reconsider traditional legal theories and doctrines pertaining to ‘legal responsibility’ or ‘liability’ in the context of a world driven by AI. There are no easy answers, but answers must be found.

Live with AI gathers thought leaders from France and Singapore to lead research projects on the positive impact of AI on our society. SGInnovate is a partner of this initiative.

You may download the latest white paper here.
