Artificial intelligence as a legal entity?

03.08.2023

Artificial intelligence increasingly serves as a decision-making aid. But who bears responsibility for its decisions and the actions that follow from them? Christoph Ammon from the Institute of Criminal Law and Criminology at the University of Bern investigates this question.

Monika Kugemann

This interview originally appeared in uniAKTUELL.

Christoph Ammon thinks that the rapid development of AI requires a rethinking of many aspects of the legal responsibility of technology and humans (© CAIM, University of Bern).

What is your research about?

Fundamentally, it is about the extent to which current technology is shifting the interaction between humans and machines. Until now, this interaction has taken place in a clear subject-object relationship: machines help us perform tasks faster, more efficiently or to a higher standard – a drill, for example. Now we are moving towards a constellation in which human decision-making is transferred, or delegated, to machines. With my research, I would like to help sharpen awareness of what is happening technologically right now, and of the fact that legal regulation is absolutely necessary to steer this development into socially acceptable channels.

What kind of decisions are we talking about here?

Using the kind of big data that is common in digitized medicine, artificial intelligence (AI) can be used to arrive at a decision on a course of action. For example, it can point to a specific diagnosis or a specific way of performing a medical intervention. In addition, intelligent surgical technology can act with increasing autonomy. So the question increasingly becomes: what is still the responsibility of the human – and what is the responsibility of the machine?

What is ultimately still the responsibility of humans – and what is the responsibility of machines?

Christoph Ammon

Why is this such a difficult topic, especially in medicine?

The licensing of medical technology is already broadly regulated: when a solution comes onto the market, it must be at least as good as a specialist in terms of sensitivity and specificity, that is, in how reliably it detects actual cases and rules out healthy ones. A physician using such technology is somewhat constrained in their own decision-making, because they know that, statistically, the tool is more accurate than they are. This may inhibit the physician from deviating from the tool's recommendation, which in effect constitutes a shift of agency from the human being to the medical product. Legally, however, the physician bears the responsibility.

Why is this a problem?

If something goes wrong, product liability law already exists today, making it possible to hold companies liable in the event of a product defect. But if neural networks whose inner workings we do not understand are making decisions, then it becomes difficult to hold the manufacturer to account, because it is unclear what caused the error: was a flawed data set used for training, was the AI incorrectly calibrated, or should the company as a whole be strictly liable? Perhaps only fully explainable AI applications should be allowed in areas with high-risk decision-making, such as medicine?

We need to accommodate technological developments from a legal perspective.

Christoph Ammon

What would be the most sensible solution from your point of view?

One possibility would be to define the AI as a functional legal entity. We would then have a legal entity that is primarily liable and that could be pursued under civil law – for example, by means of an insurance solution financed by all parties involved, such as manufacturers and users, or a liability fund into which payments are made in advance. This would also have a regulatory effect, because high-risk AI would no longer be economically viable. Whether this would make sense compared with the EU's current "risk-based approach" in its AI Act is addressed in my thesis. However, it is not primarily the civil liability question that I aim to answer, but the fundamental question of a possible legal status for AI – also with regard to criminal liability.

What is your current assessment?

Today, technological development is creating entities that can act in a legally relevant way. As early as 2017, the European Parliament proposed defining tools that perform actions independently as “electronic persons”. At the time, however, this met with strong opposition, because many saw it as anthropomorphism, i.e. the transfer of human characteristics to machines. But current technology, at least, has no will or intent of its own. Analogies, if any, are therefore more likely to be drawn with existing "artificial" legal entities such as corporations. Still, we need to have a fundamental discussion about how we want to stake out this very open field.

What are the key challenges?

Political aspects are also at stake here, and they are difficult to separate from the legal ones. Currently, many problems arise around copyright and intellectual property law: Can an AI itself be the author of an intellectual creation? What about the copyrights of the people whose works were used as training material or input for the AI? Should there be some kind of "watermarking" in data sets to distinguish what comes from a machine and what comes from a human? We need to accommodate these technological developments from a legal perspective. This also involves creating trust in a world where the boundary between fiction and truth is becoming blurred.


ABOUT THE PERSON

Christoph Ammon studied law at the Universities of Fribourg and Bern and at the University of British Columbia in Vancouver. After completing his legal training and passing the bar exam in the Canton of Bern, he returned to the University of Bern at the end of 2020 as a doctoral student to build on the results of his master's thesis. In 2024, he plans to spend a year as a visiting research student at the University of California, Berkeley.
