Jan Hořeňovský: “Artificial Intelligence is Still Largely Uncharted Territory”


Jan Hořeňovský, a doctoral student at the Department of Political Science and Sociology and a researcher at the Department of Civil Law, studies the relationship between artificial intelligence and law. His research informs both his dissertation and the article “Negative Human Rights as a Basis for Long-term AI Safety and Regulation”, which he published with Ondřej Bajgar in the Journal of Artificial Intelligence Research (JAIR), an internationally recognised American journal.


In your publication, you explore the connections between law and artificial intelligence, and you are also deeply invested in the long-term safety and regulation of AI. Could you describe your work in more detail for us?

The article you are referring to deals with a widely discussed topic these days, namely ensuring AI safety. In most cases, “law and AI” is considered from the perspective of regulating developers, liability, and so on. Our paper goes a step further – we are trying to find a way to ensure respect for negative human rights at the level of individual advanced AI systems. It is not primarily about regulating the conduct of individuals or legal entities, but rather about artificial intelligence itself, which is still largely uncharted territory.

How should AI be regulated specifically from the legal perspective? What do you propose?

Most current proposals regarding AI take the form of relatively complex codes regulating the use of AI systems. Our proposal differs in that it concerns a relatively narrow set of highly abstract principles – in our case, we have very good reasons to focus on negative human rights. Briefly put, we propose that in advanced AI systems, the positive function – what the system should do – be separated from a safety mechanism based on relatively abstract principles.

To give you an example, the primary function of many existing algorithms used on social media is to keep users online for as long as possible. There are many different ways to do that: some are harmless (such as showing interesting content), while others are highly problematic – for example, showing links to conspiracy videos and then intensifying the user’s paranoia. And this is exactly where our mechanism would come in handy: it would exclude strategies that violate negative human rights – in this case, freedom of thought – from the set of strategies available for fulfilling the primary function of keeping people online.
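
To make the idea concrete, here is a minimal sketch of that separation in Python. Everything in it – the Strategy class, the violates_negative_rights check, the engagement scores – is a hypothetical illustration of the architecture described above, not code from the paper.

```python
# Minimal sketch: separate the primary objective (engagement) from a
# rights-based safety filter. All names and values here are hypothetical
# illustrations, not part of the published paper.

from dataclasses import dataclass

@dataclass
class Strategy:
    description: str
    engagement_score: float  # how well the strategy serves the primary objective

def violates_negative_rights(strategy: Strategy) -> bool:
    """Stand-in for a learned compliance check; returns True if the
    strategy is judged to infringe a negative human right
    (e.g. freedom of thought). A toy keyword rule, for illustration only."""
    return "conspiracy" in strategy.description

def choose_strategy(candidates: list[Strategy]) -> Strategy:
    # Safety mechanism: first exclude rights-violating strategies ...
    permitted = [s for s in candidates if not violates_negative_rights(s)]
    # ... and only then optimise the primary objective.
    return max(permitted, key=lambda s: s.engagement_score)

candidates = [
    Strategy("recommend conspiracy videos", engagement_score=0.9),
    Strategy("recommend interesting, factual content", engagement_score=0.7),
]
print(choose_strategy(candidates).description)
# -> recommend interesting, factual content
```

The point of the design is that the safety filter constrains the search space before the objective is maximised, so a highly “effective” but rights-violating strategy can never win on engagement alone.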

In the introduction to your article, you mention the calls for advanced AI to learn human values. Which values? Different people and different societies hold different values. Might international conventions or agreements be of any help?

This is exactly why we avoid the term “human values” in our proposal. We believe this term is simply too ambiguous, which makes it unsuitable for regulating AI systems. The same goes for ethical theories, which also differ greatly. In contrast, the concept of negative human rights is better suited to this purpose because these rights are defined more clearly, and good learning resources are available, such as existing case law and scholarly literature. But we narrow it down even further, to the decision of a specific decision-making body – in the manner of legal realism. In our proposal, compliance or non-compliance is derived from the decision of a specific body. Such decision-making also yields a binary distinction between lawful and unlawful, which makes it well suited to the machine-learning paradigm.
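
As an illustration of why this binarity suits machine learning, the following sketch treats past decisions of a hypothetical decision-making body as labelled training data for a compliance classifier. The example cases, labels, and model choice are assumptions made purely for illustration, not the paper’s actual method.

```python
# Sketch: binary lawful/unlawful decisions as supervised training data.
# The cases and the simple text-classification model are illustrative
# assumptions only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical past rulings: a description of the conduct plus the
# body's binary verdict (1 = violates a negative right, 0 = compliant).
cases = [
    ("covert mass surveillance of citizens", 1),
    ("targeted manipulation reinforcing a user's paranoia", 1),
    ("showing relevant, factual content to users", 0),
    ("answering a user's question accurately", 0),
]
texts, verdicts = zip(*cases)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, verdicts)

# The trained model assigns a new candidate strategy the same binary
# label the decision-making body would: lawful (0) or unlawful (1).
print(model.predict(["amplify conspiracy videos to keep users online"]))
```

A real system would of course need far more decisions than this toy set, but the structure – body’s rulings in, binary compliance label out – is exactly what standard supervised learning expects.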

Has regulation already been introduced in some countries? If so, what can be done if individual countries or societies decide to regulate AI in different ways?

First drafts of regulations already exist, such as the EU’s Artificial Intelligence Act. However, one needs to realise that even without comprehensive regulation, AI is not unregulated. In many areas, it is governed by existing rules, such as those on consumer protection and personal data protection. New drafts also often overlap unnecessarily with the old regulations. Take, for example, the prohibition on using AI for the mass surveillance of citizens by public authorities. Do you think something like this is not already provided for in existing law, for example in the Charter of Fundamental Rights and Freedoms?

Thank you, that’s a very telling example. Do you deal with any other related issues in your work? For example, technical design?

Yes, our paper is half legal and half technical, so the question of technical implementation is very important. I think this is the direction research in this area is heading. However, it is important to realise that multidisciplinary research poses certain problems – it is sometimes necessary to simplify the output significantly so that it is comprehensible to all target groups, and not all reviewers are fans of this approach.

Your article was published in April in a foreign journal. Which one?

We got the article published in the Journal of Artificial Intelligence Research (JAIR), an American journal covering all aspects of artificial intelligence. The last time I looked, I believe it was the eighth-ranked open-access scientific journal on artificial intelligence in the world. It is published by a non-profit organisation called the AI Access Foundation, whose purpose is to facilitate the dissemination of scientific results in artificial intelligence. It is basically a gesture of defiance against large publishing companies like Elsevier, whose conduct often crosses ethical lines (for instance, charging very high fees for open access). And since the community studying AI is quite progressive in this respect, it has founded its own journals, including this one.

Artificial intelligence is in the spotlight these days. I’m sure that many students and academics will soon be looking into this topic and publishing on the legal aspects of AI. But you’ve been studying this phenomenon for some time, right?

Yes, this is one of the topics of my dissertation, which should be completed in the near future. However, the dissertation deals with the relationship between law, society, and new technologies from a broader perspective. In fact, we spent about three years working on this specific paper, which is quite a lot. It is also interesting to watch how the topic has been growing in relevance. Just three years ago, it was basically a theoretical issue, while today it is discussed in the media on an almost daily basis.

There must be abundant literature available on the topic. Have you considered existing publications and articles in your paper? If so, what did you find to be most useful or important, and, in contrast, do you see any weak points?

Yes, our article includes quite an extensive overview of the existing literature. I think the current literature identifies the potential risks very well. Where it is weaker is in proposing workable solutions that hold up not only from the legal but also from the technical perspective.

Thank you very much for the interview and for telling us about your work.

Thank you.


Author of the interview: 

BcA. Pavel Nesit, editor of the Department of Communication and Public Affairs