Hybrid Ethics for Generative AI: Some Philosophical Inquiries on GANs

  • Antonio Carnevale Norwegian University of Science and Technology (NTNU)
  • Claudia Falchi Delgado Maastricht University, Department of Technology & Society Studies, DEXAI – Etica artificiale
  • Piercosma Bisconti Sant’Anna School of Advanced Studies, DEXAI – Etica artificiale
Keywords: Generative Adversarial Networks (GANs); Ethics; Deepfake; Perception and trustworthiness of AI-based systems; Hybrid socio-technical systems


Until now, the mass spread of fake news, and the resulting loss of citizens' trust in institutions, has mainly involved textual content. Recently, a new type of machine learning framework has emerged: Generative Adversarial Networks (GANs), a class of deep neural network models capable of creating multimedia content (photos, videos, audio) that imitates authentic content with extreme precision.
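The adversarial scheme behind GANs can be conveyed with a deliberately minimal sketch: a generator tries to mimic samples from a target distribution while a discriminator tries to tell real samples from generated ones, and each is updated against the other. The one-dimensional setup, parameter names, and learning settings below are illustrative assumptions for exposition only, not the deep convolutional models used for deepfakes.

```python
import numpy as np

# Toy sketch of adversarial training: a generator mimics N(4, 1)
# while a logistic discriminator tries to separate real from fake.
# All names and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x = mu + sigma * z, with noise z ~ N(0, 1)
mu, sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(a * x + b)
a, b = 0.1, 0.0

lr, batch, steps = 0.02, 64, 3000
for _ in range(steps):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + sigma * z

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    s_r = sigmoid(a * real + b)
    s_f = sigmoid(a * fake + b)
    a += lr * (np.mean((1 - s_r) * real) - np.mean(s_f * fake))
    b += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    s_f = sigmoid(a * fake + b)
    mu += lr * np.mean((1 - s_f) * a)
    sigma += lr * np.mean((1 - s_f) * a * z)

# After training, the generator mean typically drifts toward the
# real mean (4.0), i.e. the fakes become hard to distinguish.
print(round(mu, 2))
```

Each multimedia-generating GAN follows this same logic, only with deep networks in place of the two linear models and images or audio in place of the scalar samples.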

While there are several areas of worthwhile application of GANs – e.g., in the field of audio-visual production, human-computer interaction, satire, and artistic creativity – their deceptive uses, at least as currently foreseeable, are just as numerous and worrying. The main concern is linked to so-called “deepfakes”, fake images or videos that simulate real events with extreme precision. When trained on a human face, GANs can make the face assume hyper-realistic movements, expressions, and (verbal and non-verbal) communication abilities. This technology poses an urgent threat to the governance of democratic processes, in particular to the formation of public opinion and political discourse, with significant potential for reality-altering and disinformation.

After a short introduction to their current technical state of the art, in this paper we inquire into the GANs' socio-technical system through different, intertwined philosophical accounts. First, we examine the conditions under which GAN-generated content is perceived as trustworthy, also considering the general effects GANs might have on the perceived trustworthiness of individuals. We then discuss the inadequacy of approaching GANs merely as a perception-altering technology. Against this backdrop, we propose a theoretical turn that considers human-machine relationships of trustworthiness as elements of a broader hybrid socio-technical system. This turn entails political repercussions that we discuss in the last part of the paper.

How to Cite
Carnevale, A., Falchi Delgado, C., & Bisconti, P. (2023). Hybrid Ethics for Generative AI: Some Philosophical Inquiries on GANs. HUMANA.MENTE Journal of Philosophical Studies, 16(44), 33-56. Retrieved from https://www.humanamente.eu/index.php/HM/article/view/434