PERCEPTION IMAGES AND CONCEPTUALIZATION OF ANTHROPOLOGICAL CHALLENGES OF ARTIFICIAL INTELLIGENCE
DOI: 10.23951/2312-7899-2024-1-102-119
The challenges of artificial intelligence are examined from the methodological standpoint of bioethical analysis of the anthropological risks and threats posed by new technologies. Society shows a cautious attitude toward artificial intelligence technology. The anthropological challenges of AI constitute a problematic situation: the benefits and harms of the new technology, and the risks and threats it poses to humans, are difficult to assess adequately. It is therefore necessary to outline the anthropological challenges of AI conceptually, drawing on images of AI perception represented in art and cinema, in ethical rules, in philosophical reflection, and in scientific concepts. Projected through its various definitions, artificial intelligence becomes a metaphor that serves as a source of creative conceptualizations of the new technology. Images of AI are identified through the conceptualization, visualization, and institutionalization of risks and correspond to specific types of attitudes toward innovation in society. A peculiarity of these images of AI perception, both as forms of conceptualization and as their visual or institutional objectification in ethical codes, is that they are formed actively and purposefully. By analogy with the regulation of biotechnologies, normatively conceptualized positions on new technologies divide into conservative (restrictive and prohibitive), liberal (welcoming innovation), and moderate (compromise-seeking), the last of which often becomes the basis for ethical and legal regulation. However, sociological surveys show that even those who welcome the emergence of neural networks and the widespread use of artificial intelligence also display caution and uncertainty in assessing the human future. A three-part typology of images of the perception of anthropological challenges is proposed; it fixes no linear opposition of positions toward AI but outlines vectors of possible ways of habituating to, and semiotizing, the future.
The first, alarmist, type is distinguished by an emotionally evaluative attitude: new technologies are seen as redundant and provoke alarm and fear. The second, instrumentalist, type of perception is characteristic of AI actors within a professionally formed worldview; some concepts of the professional thesaurus pass into common parlance. The third type is user-oriented: for it, what matters is how the interaction between AI and humans unfolds. A collective response to the anthropological challenges of AI is more likely to form on a utilitarian-pragmatic basis. Effective responses may rest on an individual strategy of self-preservation, which may require, for example, adherence to cognitive hygiene in education. In the context of AI development, the task arises of developing rules and procedures for such a preservation strategy.
Keywords: artificial intelligence, neural networks, AI concept, anthropological challenges, perception images, AI ethical code, humanistic expert evaluation
References:
A-AI. (2022). AI Code of Ethics. https://a-ai.ru/ethics/index.html (In Russian).
Andreev, V. et al. (2020). Artificial intelligence in the system of electronic justice by consideration of corporate disputes. Vestnik of Saint Petersburg University. Law, 1, 19–34. (In Russian).
Chernyak, L. (2019). Sexism and chauvinism of artificial intelligence. Why is it so difficult to overcome? TAdviser.ru. February 26. http://www.tadviser.ru/index.php/Article:AI_bias (In Russian).
Dubrovsky, D. I. (2022). The development of artificial intelligence and the global crisis of earthly civilization (to the analysis of socio-humanitarian problems). Filosofiya nauki i tekhniki, 27(2), 100–107. (In Russian).
Fried, I. (2021). Google fires another AI ethics leader. Axios. Feb 20, 2021. https://www.axios.com/2021/02/19/google-fires-another-ai-ethics-leader
Gluzdov, D. V. (2022). Philosophical and anthropological foundations of the interaction of artificial and natural intelligence. Vestnik Mininskogo universiteta, 10(4), 15.
Knell, S., & Rüther, M. (2023). Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life. AI Ethics, 17 April 2023. https://doi.org/10.1007/s43681-023-00273-w
Kondakov, I. M. (2007). Psikhologiya. Illyustrirovannyy slovar’ [Psychology. Illustrated Dictionary]. Prime Eurosign.
Lauer, D. (2021). You cannot have AI ethics without ethics. AI Ethics, 1, 21–25.
Mamina, R. I., & Pochebut, S. N. (2022). Artificial Intelligence in the View of Philosophical Methodology: an Educational Track. DISCOURSE, 8(1), 64–81. (In Russian).
Melik-Gaykazyan, I. V. (2022). Semiotic diagnostics of the trajectory splitting between a dream of the past and dream of the future. Istoriya, 13:4(114). https://doi.org/10.18254/S207987840021199-7 (In Russian).
Panova, O. B. (2022). Chelovek epokhi smart-technologies: refleksiya nad antropologicheskoy problematikoy sovremennosti (filosofiya – nauka – iskusstvo) [The Human of the era of smart technologies: reflections on the anthropological problems of modernity (philosophy – science – art)]. In L. P. Kiyashchenko, & T. A. Sidorova (Eds.), Chelovek kak otkrytaya tselostnost’ [The Human as an open integrity]. Akademizdat.
Perrault, R. et al. (2019). Artificial Intelligence Index Report. Stanford University.
Schultz, M. D., & Seele, P. (2022). Towards AI ethics’ institutionalization: knowledge bridges from business ethics to advance organizational AI ethics. AI Ethics, 3, 99–111.
Shtayn, О. А. (2010). Body transformation in modernity. Vestnik Udmurtskogo universiteta. Filosofiya. Psikhologiya. Pedagogika, 1, 99–102. (In Russian).
Skvortsov, D. E. (2015). Oppositions and metaphors of visual thinking in culture. Vestnik Volgogradskogo universiteta. Ser. Philosophy, 2(28), 95–99. (In Russian).
Smirnov, S. A. (2023). The temptation of not being or ontological roots of technological outsourcing. Chelovek, 34(1), 28–50. (In Russian).
Tsamados, A., et al. (2022). The ethics of algorithms: key problems and solutions. AI & Soc, 37, 215–230.
WCIOM. (2022). Artificial intelligence: a threat or a bright future? Analytical review. https://wciom.ru/analytical-reviews/analiticheskii-obzor/iskusstvennyi-intellekt-ugroza-ili-svetloe-budushchee (In Russian).
Yasin, M. I. (2022). Youth perceptions and attitudes about artificial intelligence. Izvestiya Saratovskogo universiteta. Novaya seriya. Seriya: Filosofiya. Psikhologiya. Pedagogika, 22(2), 197–201. (In Russian).
Yudin, B. G. (2011). The boundaries of a human being as a space of technological and logical influences. Voprosy sotsial’noy teorii, 5, 102–118. (In Russian).
Zykova, I. V. (2022). Transdistsiplinarnaya model’ razvitiya nauchnykh vozzreniy na prirodu lingvokreativnosti: filosofiya vs psikhologiya vs semiotika vs lingvistika [Transdisciplinary model of developing scientific views on the nature of linguistic creativity: philosophy vs psychology vs semiotics vs linguistics]. In L. P. Kiyashchenko, & T. A. Sidorova (Eds.), Chelovek kak otkrytaya tselostnost’ [The Human as an open integrity]. Akademizdat.
Issue: 1, 2024
Rubric: ARTICLES
Pages: 102–119