In October, Prof Margit Sutrop, head of the Centre for Ethics, and Mari-Liisa Parder, a research fellow in ethics, conducted a workshop on the relationship between artificial intelligence and ethics, explaining why trust is such an important factor.
The workshop, organised for partners in the research project RAYUELA, began with Prof Sutrop's overview of what artificial intelligence (AI) actually is and how it can be classified. “Today’s topic is whether we control artificial intelligence or it controls us,” she joked in her introduction. Prof Sutrop pointed out that in her research she most often uses the definition of the European Commission's independent High-Level Expert Group, according to which AI refers to systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world, or embedded in hardware devices.[1]
AI systems are categorised into different groups based on their capabilities, ranging from solving a single specific task to superintelligence. Prof Sutrop also outlined the basic risks: humans misusing AI, unintended consequences, and AI’s own autonomous actions. AI may threaten our autonomy and social relations, affect our idea of what it means to be human, and raise the value alignment problem.
Different needs
“Artificial intelligence developers ask: what values should be respected? What values should the AI be given to follow?” Prof Sutrop said. To illustrate the question, she gave a humorous example she had heard about an AI with only one task. “Let's say you have a cooking robot at home. If its only job is to cook, but there is no food available in the kitchen, it will make you cat soup for dinner. We must somehow make it clear to the system that eating is not our sole purpose, and that we also have preferences it should follow.”
People's preferences and values form a hierarchy, so it needs to be clarified what kind of AI we want: systems that fulfil our primary needs or ones that keep our long-term goals in mind, Prof Sutrop explained. “To answer that, we must ask: what does a good life mean? What goals do we want to fulfil?”
Artificial intelligence and trust
Ethical AI is needed because it is the cornerstone of trust. A distinction can also be drawn between trust as a social behaviour and trustworthiness as a characteristic. When we talk about artificial intelligence, Prof Sutrop said, one might ask whether we trust the AI systems themselves or the people and institutions behind them.
“My own research shows that when we talk about trusting AI, it actually means trusting the people who create, use and govern AI. Thus, trust is not so much about the systems as about the people and institutions,” she explained.
The European Commission expert group has defined trustworthy AI as lawful, ethical and robust (in both technical and social terms).[1]
However, according to Prof Sutrop, those guidelines contain contradictions: “It is a shame to say so, but to this day it is not entirely clear what ethical behaviour and values all people should respect unconditionally.” On the one hand, the European Commission's guidelines list certain values as absolute, while on the other they emphasise the need to respect the pluralism of individual values and choices. “Deliberating over values and weighing moral decisions is easy for philosophers, but the creators of artificial intelligence often ask how exactly to do it,” she joked.
Ethics as part of the creation of AI
Prof Sutrop pointed out that she liked the idea of ethics by design[2]: before an AI is created, there is already a clear conception of the ethical requirements, the possible threats to be avoided, and so on. She was pleased that the current trend is for ethicists and engineers to sit at the same table from the very start, so to speak.
As part of the research project RAYUELA, a serious educational game will be created, along with an artificial intelligence that mimics the behaviour of teenagers in the game. Mari-Liisa Parder, a research fellow in ethics at the Centre for Ethics, introduced the project consortium to practical tools that can be applied when creating the AI. In her opinion, one of the most comprehensive is the ALTAI checklist[3], which is question-based: “For example, you need to answer the question: is the AI system designed to interact with people, guide them, or make decisions that affect people or society?” She added that, although it may seem such questions only need to be answered at the end of development, they must be kept in mind throughout the process.
Each tool certainly has its own advantages and limitations, and the field is evolving very quickly. Parder pointed out that there are other ethics guidelines to bear in mind alongside ALTAI, such as ECCOLA[4] and the IEEE guidelines for autonomous and intelligent systems[5].