Human development has always been accompanied by technological tools that make life easier. Yet these same tools have consistently raised ethical dilemmas about their use.
For example, the individual clock secularized time. Cars and airplanes provided unprecedented mobility but were also turned to warfare, as the internet has been. The Manhattan Project and the atomic bomb are another telling example, as are space exploration and nanotechnology, all driven by competing interests. Today a new technology has emerged, and the ethical debates revolve around it: of course, we are talking about Artificial Intelligence (AI).
The webinar “The Ethical Dilemma in Artificial Intelligence (AI)”, presented by the National Institute of Research and Fraud Prevention (INIF) and the Center for Applied Ethics at the University of the Andes in Colombia, addressed this topic. The panel of expert speakers included Freek Van Laar, Director of Digital Transformation at Interlat; Catalina Bernal, an economist and physicist from the University of the Andes with complementary studies in Biology, specializing in Biophysics; and Manuela Fernández, Associate Professor in the Department of Philosophy and the Center for Applied Ethics at the University of the Andes.
Let’s begin with some questions that arose and were discussed during the debate: What do we not know about artificial intelligence? What negative consequences can it bring? Who will benefit and who will be harmed? How is this new technology similar to and different from previous ones?
Freek Van Laar, for example, mentioned ethics and culture, both uniquely human concepts that AI can only emulate. If being ethical is difficult for humans themselves, how can we expect AI to be ethical? Manuela Fernández posed an even more philosophical question: “Should we consider AI a moral agent?”
“Ethics is the evaluation of human behavior. We do not assign this value to animals, but AI has the potential to be a moral agent,” explained the researcher. “How do we think about unethical decisions? Do we program them with the solution, or program them to seek the solution?” she wondered. “How do humans make moral decisions? It’s a discussion in the philosophy of artificial intelligence, and the issue is that we cannot agree on it,” she concluded.
As we can see, there are certain questions that can be extrapolated to any new technology. In this case, however, there is a particularity: AI thinks, knows, and makes decisions. “Ethics in AI becomes relevant when machines make decisions,” noted Catalina Bernal. “It’s not only about what they decide but also how they decide. Where does the information come from? Why was the decision made?” the economist and physicist further questioned.
Once the questions are posed, certain problems come into focus. Some people, for example, will be harmed in the job market. During the webinar, Manuela Fernández also mentioned unequal access to technology. “Who does it serve? Which communities are benefited and which are harmed?” she asked, noting that the technological divide, often associated with age or socioeconomic status, could widen further.
We also need to address authorship. When an artificial intelligence creates a text, should the text be attributed to it? Can it be cited? Should it assume responsibility? When intellectual property is at stake, resolving these questions is crucial.
And, of course, the most problematic issue revolves around data. What privacy does AI provide, and how does it use the data provided by each user? “It becomes a black box,” said Fernández. “We don’t know how it made the decision. Should they be explainable? How can we give out data without knowing what will be done with it?” she continued.
Additionally, there is a risk of algorithmic bias, since the underlying databases carry biases of their own. Catalina Bernal gave an example: if an AI learns from the historical record of how a company hires, it may disadvantage historically marginalized populations, such as women or people of African descent, and may even discriminate against the young or the elderly. In light of all these problems, Van Laar also pointed to the responsibility of organizations: “Few of them are reflecting on these issues.”
The discussion did not end there. The speakers also proposed strategies to address these dilemmas and problems. Just as there are new and old problems, there can be new and old solutions: looking at what has been done before, for example, regulations could be implemented.
It is important, first and foremost, to “begin thinking about moral principles for artificial intelligence. Some institutions already focus on responsibility, explainability, or transparency,” noted the philosopher. “Processes should be known, and they should be fair,” Bernal added.
Furthermore, the context should be considered. “Technologies are developed in specific contexts,” said Fernández. When the Manhattan Project was developed, there was a particular interest. Now, AI has been developed in private companies with potentially commercial interests. Therefore, “the end values certain things over others,” and this should not be overlooked.
Freek Van Laar argued that the cultural component should not be ignored; our own culture needs to be evaluated. “We need a digital culture,” he specified, pointing out that the current level of maturity is low. He therefore gave two pieces of advice: first, raise awareness within organizations about these issues, and second, provide the tools to promote the necessary ethical behavior. Above all, “it’s about being aware.” We should not evade dilemmas or problems; we should address them and develop strategies to confront them. First and foremost, though, we should seek to understand what is happening today with Artificial Intelligence, culture, ethics, and human beings.