Towards a Typology of Intentionally Inaccurate Representations of Reality in Media Content
Abstract
In this paper, we examine three concepts frequently discussed in relation to the spread of misinformation and propaganda online: fake news, deepfakes and cheapfakes. We identify two main problems with how these phenomena are conceptualized. First, while they are often discussed in relation to each other, it is rarely made clear what these concepts are examples of. It is sometimes argued that all of them are instances of misleading content online. This is a one-sided picture, as it excludes a vast amount of online content, namely content in which these techniques are used for memes, satire and parody, which form part of the foundation of today's online culture. Second, because of this conceptual confusion, much research and practice focuses on preventing and detecting audiovisual media content that has been tampered with, either manually or through the use of AI. This has recently led to a ban on deepfaked content on Facebook. However, we argue that such measures do not address the problems related to the spread of misinformation: rather than targeting the source of the problem, such initiatives merely target one of its symptoms. The main contribution of this paper is a typology of what we term Intentionally Inaccurate Representations of Reality (IIRR) in media content. In contrast to deepfakes, cheapfakes and fake news, all terms with mainly negative connotations, this term emphasizes both sides: the creative and playful as well as the malicious uses of AI- and non-AI-powered editing techniques.
Domains
Computer Science [cs]