Ethics applied to AIS: the need to step back from discourse

With the arrival of Google BERT in 2018, DALL-E and LaMDA in 2021, and Midjourney and above all ChatGPT in 2022, generative artificial intelligence systems (Gen AIS) have become an unavoidable topic in just a few years.

The advent of Gen AIS has naturally been accompanied by ethical questions about the potential impact of this new family of AI. However, despite the wealth of information and analysis available, the quality is not always up to standard, and the discussions surrounding Gen AIS amount more to ethical "noise" than to rigorous and constructive debate.

Yet the purpose of ethical questioning is to inform decision-making, to mediate between humans and technology to ensure that the latter is developed, deployed and used within a framework of moral acceptability and desirability.

This is illustrated by the many publications on the impact of AIS on employment, as well as by those presenting AIS as either an existential threat or a panacea. This overproduction of discourse on ethics applied to AIS (EA2AIS) should also be seen in the light of a counterproductive polarization that artificially opposes technophobia and technophilia, based on a superficial understanding of the ethical issues associated with AIS.

In fact, the discourse on EA2AIS in general, and on Gen AIS in particular, is constrained and of limited quality, all too often reduced to repetition or banalities.

The lack of depth and perspective in EA2AIS discussions also encourages nebulous debates conducted at an inappropriately high level of abstraction.

This observation is not neutral. It has a serious impact on the development, deployment and use of AIS as a whole, and of Gen AIS in particular, notably by foreclosing in-depth questioning and limiting the field of possibilities to (pre-)formatted perspectives.

It is, in our view, this lack of reflective depth in the application of ethics to AIS that explains, at least in part, what the World Economic Forum has identified as the intention/action gap: the gap between organizations' adoption of the requirements set out in the Ethics Guidelines for Trustworthy AI, released in 2019 by the European Commission, and their operationalisation. Yet the operationalisation of ethics is the key to beneficial AIS with controlled impacts.

The gap between the high level of abstraction at which the requirements and other principles intended to provide a framework for AIS are established, and the concrete reality of companies in which they must take root, makes it virtually impossible to operationalise ethics beyond mere compliance.

Concepts such as transparency and trust are too polysemous, complex and vague to be operationalised as they stand. At most, they are ideals that need to be tested against reality before their operationalisation can even be contemplated. The term transparency alone can be understood in various ways. It can refer to access to the content of programs in order to understand how algorithms work and therefore explain the results they produce. It can also refer to the need to inform users interacting with an AIS that they are dealing with a technical object and not a human being. Beyond these, there are dozens of definitions of the term. What is true of transparency is true of all the terms used in the EA2AIS sphere.

In fact, when it comes to applying a recommendation or a requirement, companies must first contend with a lack of definitional clarity before striving to make the notion applicable to a concrete use case.

Added to this is the question of the relevance of concepts such as trust or responsibility as applied to AIS at all, and of their appropriateness for certain sectors.

If, as Thomas Metzinger asserted, trustworthy AI is "conceptual nonsense", it is also questionable in its effects, since establishing a relationship of trust may lower the vigilance of the users of these systems. Trust in AIS applied to the medical or military sectors is thus likely to have unfortunate consequences in certain cases. "Trustworthy AI", presented as a principle, can therefore become a real problem rather than an advantage. The same applies to transparency, which can sometimes be counter-productive, even dangerous.

Clearly, the vocabulary and narrative surrounding EA2AIS pose a fundamental problem: how do we extract ourselves from the dominant discourse in order to carry out a rigorous analysis, as objective as possible, of the impact of AIS and, as a result, put in place effective strategies to limit the risks and capitalize on the benefits? Current issues around productivity, environmental impact and employment perfectly illustrate the importance of questioning the narrative in order to reach informed decisions that are relevant to organizations.

Anticipating a potential productivity gain generated by AIS is a risky business, especially as the notion of productivity is very often reduced to the time saved by certain AIS in certain specific cases. But productivity is a complex concept that cannot be reduced to time savings. Furthermore, while all hypotheses are possible, none is guaranteed to hold: the productivity gain of a new technology cannot be predicted with certainty, and assessing it properly takes time. Finally, the available evidence is currently too disparate and heterogeneous to allow us to conclude with any certainty that the use of AIS saves time.

The same applies to environmental impact. Notwithstanding the numerous studies tending to emphasize the negative impact of AIS on the environment, the issue is still treated in absolute terms, using data whose coherence is not always guaranteed. The lack of transparency of certain organizations, for example, makes it impossible to arrive at definitive results. In addition, the absolute impact of AIS needs to be weighed against its relative impact, in order to determine the weight of AIS in environmental issues compared with other technologies, or even other human activities. In the absence of more rigorous study, organizations run the risk of missing the real impacts, over- or underestimating them, and taking decisions on the basis of erroneous or superficial considerations.

The impact on employment follows the same logic, with the debate polarized between two discourses. The alarmist view holds that AIS will replace humans in virtually every sector of activity. The optimistic view holds that the jobs lost will be offset by new jobs created, and that AIS will enable humans to augment their capabilities, or even free them from thankless tasks. The reality certainly lies somewhere between these two positions. Peremptorily asserting that AIS will never replace humans, or on the contrary that it will, fails to grasp the complexity of the issues behind the question of the direct and indirect impact of AIS on employment.

By keeping these issues at a very high level of abstraction and approaching them from an absolute angle, organizations are depriving themselves of a detailed understanding of the issues involved. In so doing, they run the risk of steering their strategies in the wrong direction.

The Generative AIS Working Group (Gen AIS WG), set up in January 2024 by the Human Technology Foundation, has taken the measure of these challenges and of the importance of moving from a high level of abstraction to a granular approach based on the study of organizations' use cases and the assessment of their positive and negative impacts. Through a broad mapping of use cases, a taxonomy of impacts and their prioritization, the Gen AIS WG has highlighted the pitfalls of failing to take into account the weight of the narrative in perceptions of the subject.

The establishment, on 24 June 2024, of a Targeted Working Group (TWG) on the environmental impact of Gen AIS demonstrates both the importance of a more detailed analysis of the subject and the clear interest of the members of the Gen AIS WG in drawing on this analysis to guide their strategies appropriately.

Human Technology Foundation, June 25th 2024

Further reading

Anne Skeet and Jim Guszcza. How businesses can create an ethical culture in the age of tech. World Economic Forum, January 7, 2020. https://www.weforum.org/agenda/2020/01/how-businesses-can-create-an-ethical-culture-in-the-age-of-tech/

High-Level Expert Group on AI. Ethics Guidelines for Trustworthy AI. European Commission, 2019. https://digital-strategy.ec.europa.eu/fr/library/ethics-guidelines-trustworthy-ai

Noman Bashir, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. The Climate and Sustainability Implications of Generative AI. An MIT Exploration of Generative AI, March 30, 2024.

Nicolas van Zeebroeck. IA : promesses de productivité, apocalypse pour l’emploi ? The Conversation, June 3, 2024. https://theconversation.com/ia-promesses-de-productivite-apocalypse-pour-lemploi-230480

Fabien Toux. Groupe de travail sur les impacts des systèmes d’IA génératives. Human Technology Foundation, May 22, 2024. https://www.human-technology-foundation.org/fr-news/groupe-de-travail-sur-les-impacts-des-systemes-dia-generatives

Emmanuel R. Goffi, Victor de Salin, Aymeric Thiollet, Hugo Mottard and Fabien Toux. Systèmes d’IA Génératives. Identification, classification et gestion des impacts positifs et négatifs - Rapport intermédiaire. Human Technology Foundation, May 2024. https://www.human-technology-foundation.org/fr-news/rapport---systemes-dia-generatives
