Artificial Intelligence on Guard Against the Harmful Effects of Deepfakes

I. Kyianytsia, D. Fayvishenko

Abstract


The purpose of the study is to outline the threats posed by modern deepfake-creation technologies, to substantiate the need for legal regulation of their dissemination, and to offer practical recommendations for recognizing deepfakes at the everyday level, in particular through the use of artificial intelligence.

Research methodology. The materials for this article were compiled using a combination of theoretical and empirical methods, including the analysis of sources on the role of deepfakes in the media environment and their impact on society as a whole. The analysis of foreign websites containing legislative acts made it possible to systematize these sources on the relevant issues and to strengthen the argument for legally regulating the dissemination of such falsifications. These methods, together with inductive generalization of the field under study, helped to structure the material needed to establish a rational basis for recognizing deep audiovisual counterfeits.

Results. A list of rules that can be used to recognize deepfakes is proposed, and a list of online resources for their detection is reviewed and systematized in order to increase the overall level of media literacy and awareness of threats that negatively affect the mental health of society.

Novelty. Based on the analysis, systematization, and generalization of the sources, recommendations for strengthening critical thinking among the population are proposed, and the need for visual training is emphasized so that viewers are not deceived by the next deep audiovisual fake.

Practical significance. The proposed rules can be used both widely in society to develop critical thinking and in designing the set of competencies and programme outcomes for media education disciplines.

Key words: deepfake, disinformation, manipulation, media addiction, media literacy, artificial intelligence.


References


Valorska, M. A. (2020). Dipfeik ta dezinformatsiia [Deepfake and disinformation] (V. Oliinik, Trans.). Kyiv: Akademiia ukrainskoi presy; Tsentr vilnoi presy [in Ukrainian].

Dipfeiky ta ShI u peredvyborchii ahitatsii v Indii: eksperty poboiuiutsia, shcho tekhnolohii mozhut zminyty vybory v usomu sviti [Deepfakes and AI in election campaigning in India: Experts fear the technology could change elections around the world]. Retrieved from https://zn.ua/ukr/TECHNOLOGIES/dipfejki-ta-ii-u-peredviborchij-ahitatsiji-v-indiji-eksperti-pobojujutsja-shcho-tekhnolohiji-mozhut-zminiti-vibori-v-usomu-sviti.html [in Ukrainian].

Podobnyi, O., & Slatvinska, V. (2022). Dipfeik v konteksti deklaratsii pro maibutnie internetu [Deepfake in the context of the declaration about the future of the internet]. Yurydychnyi naukovyi elektronnyi zhurnal, 5, 594–596. Retrieved from http://lsej.org.ua/5_2022/142.pdf [in Ukrainian].

Iurtaieva, K. V. (2021). Kryminolohichnyi analiz vykorystannia tekhnolohii Deepfake: koly feik staie zlochynom [Criminological analysis of the use of Deepfake technology: When a fake becomes a crime]. Visnyk kryminolohichnoi asotsiatsii Ukrainy, 1 (24), 31–42 [in Ukrainian].

Bahar, M., & Sharmin, A. (2021). Deep insights of deepfake technology: A review. Retrieved from https://www.academia.edu/76656464 [in English].

Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war. The coming age of post-truth geopolitics. Retrieved from https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war [in English].

Cole, S. (2019). California’s deepfake law aims to ban the technology for election misinformation. Vice News. Retrieved from https://www.nytimes.com/2024/09/17/technology/california-deepfakes-law-social-media-newsom.html [in English].

Declaration for the Future of the Internet. (2022). Retrieved from https://digital-strategy.ec.europa.eu/en/library/declaration-future-internet [in English].

Hine, E., & Floridi, L. (2022). New deepfake regulations in China are a tool for social stability, but at what cost? Nature Machine Intelligence [in English].

H.R. 3230 – DEEP FAKES Accountability Act, 116th Congress (2019–2020). Retrieved from https://www.congress.gov/bill/116th-congress/house-bill/3230/text [in English].

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC. Retrieved from https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng [in English].

Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., & Ortega-Garcia, J. (2019). DeepFakes and beyond: A survey of face manipulation and fake detection. Retrieved from https://www.researchgate.net/publication/338355353_DeepFakes_and_Beyond_A_Survey_of_Face_Manipulation_and_Fake_Detection [in English].

Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9, 40–53. Retrieved from https://timreview.ca/sites/default/files/article_PDF/TIMReview_ [in English].




DOI: http://dx.doi.org/10.32840/cpu2219-8741/2024.4(60).14



Since 2013, all electronic versions of the journal have been stored in the V. I. Vernadsky National Library of Ukraine of the National Academy of Sciences of Ukraine and presented on its portal in the information resource "Scientific Periodicals of Ukraine".

Indexing of the journal in scientometric databases:

The publication is indexed by Citefactor: 2019/2020: 4.54.

The journal is indexed by Google Scholar.

In 2020, the journal was included in the Index Copernicus.

The journal is indexed by Innospace Scientific Journal Impact Factor (SJIF): 2016: 5.899; 2017: 6.435; 2018: 7.037; 2019: 7.431.

Since 2020, the collection has been indexed by ResearchBib.

The journal is included in the PKP Index.