Abstracts
Résumé
Online disinformation is not losing momentum. The term infocalypse was in fact coined by Schick (2020) to describe the severity of disinformation in the form of malicious deepfakes, which obscure what was previously considered authentic. This reflective article delineates the complexity of the deepfake phenomenon in order to highlight the urgent need for research that directly engages actors in the world of education. Through the lens of citizenship education, we argue that teachers, parents and educational actors must support young people in developing critical thinking, agency in the digital context and discernment with regard to malicious deepfakes so that they can protect themselves and counter this phenomenon. To answer the question "Why educate young people about malicious deepfakes?", we began by delineating this phenomenon and its dangers in a context of disinformation. First, this article shows how deepfakes shake the perceptual foundations of human knowledge and discusses the threat they pose to our societies. Second, it presents possible avenues and draws attention to the urgency of making young people aware of these dangers and involving them in countering this form of disinformation. Finally, the article proposes avenues for research to explore in digital citizenship.
Abstract
Online disinformation is not on the decline. The term infocalypse was in fact coined by Schick (2020) to describe the seriousness of disinformation in the form of malicious deepfakes, which obscure what was previously considered authentic. This reflective article explores the complexity of the deepfake phenomenon to emphasize the urgent need for research that directly addresses educators and decision-makers. Through the lens of citizenship education, we argue that teachers, parents and educational actors must support students in developing critical thinking, agency in the digital environment and the ability to recognize deepfakes so that they can protect themselves and counteract this phenomenon. To answer the question "Why educate young people about malicious deepfakes?", we started by exploring this phenomenon and its dangers in the context of disinformation. First, this article shows how deepfakes destabilize the perceptual foundations of human knowledge and discusses their threat to our societies. Second, it presents possible avenues and draws attention to the urgency of alerting young people to these dangers and involving them in counteracting this form of disinformation. Lastly, the article offers avenues for research in digital citizenship.
Resumen
Online disinformation has not declined. The term infocalypse was coined by Schick (2020) to describe the severity of disinformation in the form of malicious deepfakes, which obscure what was previously considered authentic. This reflection delineates the complexity of the deepfake phenomenon in order to underscore the urgent need for research that directly engages actors in the world of education. From the perspective of citizenship education, we propose that teachers, parents and educational actors must support young people in developing critical thinking, agency in the digital context and discernment with regard to malicious deepfakes so that they can protect themselves and confront this phenomenon. To answer the question "Why educate young people about malicious deepfakes?", we begin by delineating this phenomenon and its dangers in a context of disinformation. First, this article shows how deepfakes undermine the perceptual foundations of human knowledge and examines the dangers facing our societies. It then presents the avenues explored and draws attention to the urgency of making young people aware of these threats and involving them in confronting this form of disinformation. Finally, the article proposes lines of research to explore in the field of digital citizenship.
Bibliographie
- Agence France-Presse. (2019, 6 septembre). Les géants de la technologie lancent le « deepfake challenge » pour contrer la désinformation. Radio-Canada. https://ici.radio-canada.ca/nouvelle/1288876/deepfake-challenge-lutte-facebook-desinformation
- Ajder, H., Patrini, G., Cavalli, F. et Cullen, L. (2019). The state of deepfakes: Landscape, threats, and impact. Deeptrace. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
- Andreadakis, Z. (2020). Deep fakes and intelligence in the digital landscape: Preliminary systematic review findings. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3516344
- Baines, D.L. et Elliott, R.J.R. (2020, 2 février). Defining misinformation, disinformation and malinformation: An urgent need for clarity during the COVID-19 infodemic. IDEAS. https://ideas.repec.org/p/bir/birmec/20-06.html
- Bandura, A. (2006). Toward a psychology of human agency. Perspectives on Psychological Science, 1(2), 164-180. https://doi.org/10.1111/j.1745-6916.2006.00011.x
- Barma, S., Laferrière, T., Lemieux, B., Massé-Morneau, J. et Vincent, M.C. (2017). Early stages in building hybrid activity between school and work: The case of PénArt. Journal of Education and Work, 30(6), 669-687.
- Bellanger, A. (2021, 17 février). Comment la Finlande a-t-elle vaincu les « fake news »? France Inter. https://www.franceinter.fr/emissions/les-histoires-du-monde/les-histoires-du-monde-17-fevrier-2021
- Block, L.G. et Keller, P.A. (1997). Effects of self-efficacy and vividness on the persuasiveness of health communications. Journal of Consumer Psychology, 6(1), 31-54.
- Brin, C., Durand, F., Gramaccia, J. et Thiboutot, C. (2021, avril). Portrait d’une infodémie. Retour sur la première vague de COVID-19. Observatoire international sur les impacts sociétaux de l’IA et du numérique. https://www.docdroid.com/NkXvSWt/portrait-dune-infodemie-retour-sur-la-premiere-vague-de-la-covid-19-pdf
- Burgund, H. et Panetta, F. (2019). In Event of Moon Disaster. https://moondisaster.org/
- Canadian Commission for UNESCO. (2018, 5 novembre). Fighting “fake news”: How youth are navigating modern misinformation and propaganda online. https://en.ccunesco.ca/blog/2018/11/fighting-fake-news
- Ceci, L. (2021, 14 septembre). Hours of video uploaded to YouTube every minute 2007–2020. Statista. https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/
- Charlet, K. et Citron, D. (2019, 5 septembre). Campaigns must prepare for deepfakes: This is what their plan should look like. Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/09/05/campaigns-must-prepare-for-deepfakes-this-is-what-their-plan-should-look-like-pub-79792
- Charlton, E. (2019, 21 mai). How Finland is fighting fake news in the classroom. World Economic Forum. https://www.weforum.org/agenda/2019/05/how-finland-is-fighting-fake-news-in-the-classroom/
- Chesney, R. et Citron, D. (2018, 16 octobre). Disinformation on steroids: The threat of deep fakes. Council on Foreign Relations. https://perma.cc/V9VF-J4DF
- Chesney, R., Citron, D. et Farid, H. (2020, 11 mai). All’s clear for deepfakes: Think again. Lawfare. https://www.lawfareblog.com/alls-clear-deepfakes-think-again
- Citron, D. (2019, 11 septembre). How deepfakes undermine truth and threaten democracy [vidéo]. Conférences TED. https://www.ted.com/talks/danielle_citron_how_deepfakes_undermine_truth_and_threaten_democracy
- CrashCourse, Mediawise et Stanford History Education Group. (2021). Navigating Digital Information. YouTube. https://www.youtube.com/playlist?list=PL8dPuuaLjXtN07XYqqWSKpPrtNDiCHTzU
- Davidson, A.-L., Naffi, N. et Raby, C. (2017). A PCP approach to conflict resolution in learning communities. Personal Construct Theory & Practice, 14, 61-72. http://www.pcp-net.org/journal/pctp17/davidson17.pdf
- Defense Advanced Research Projects Agency. (2019, 3 septembre). Uncovering the who, why, and how behind manipulated media. DARPA. https://www.darpa.mil/news-events/2019-09-03a
- Ferrer, C.C., Dolhansky, B., Pflaum, B., Bitton, J., Pan, J. et Lu, J. (2020, 12 juin). Deepfake detection challenge results: An open initiative to advance AI. FacebookAI. https://ai.facebook.com/datasets/dfdc/?fbclid=IwAR3wzAS4I75X4dC9GWJ3p1HwXxvd982rYF7N3nWbtuVZLZ2TMtywzCiSn74
- Frau-Meigs, D. (2015). De l’identité à la présence en ligne : enjeux de translittératie et de socialisation [vidéo]. Sam Network. https://www.sam-network.org/video/conference-prof-frau-meigs-universitat-sorbonne-paris-fr
- Frenda, S.J., Knowles, E.D., Saletan, W. et Loftus, E.F. (2013). False memories of fabricated political events. Journal of Experimental Social Psychology, 49(2), 280-286.
- Glick, J. (2020). Deepfakes 101. https://moondisaster.org/deepfakes-101
- Goldzweig, R. et Brady, M. (2020). Deepfakes: How prepared are we? Democracy Reporting International. https://democracy-reporting.org/uploads/publication/14512/document/2020-11-deepfakes-publication-no-2-6086d42418da4.pdf
- Gordon, M. (2018). Lying in politics: Fake news, alternative facts, and the challenges for deliberative civics education. Educational Theory, 68(1), 49-64.
- Grabe, M.E. et Bucy, E.P. (2009). Image bite politics: News and the visual framing of elections. Oxford University Press.
- Graber, D.A. (1990). Seeing is remembering: How visuals contribute to learning from television news. Journal of Communication, 40(3), 134-156.
- Güera, D. et Delp, E.J. (2018). Deepfake video detection using recurrent neural networks. IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS). https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf
- HabiloMédias. (2021). Notre mission et notre philosophie. https://habilomedias.ca/qui-nous-sommes/notre-mission-et-notre-philosophie
- Hasan, H.R. et Salah, K. (2019). Combating deepfake videos using blockchain and smart contracts. IEEE Access, 7, 41596-41606. https://doi.org/10.1109/ACCESS.2019.2905689
- Howell O’Neill, P. (2019, 19 septembre). The world’s top deepfake artist: ‘Wow, this is developing more rapidly than I thought’. MIT Technology Review. https://www.technologyreview.com/s/614343/the-worlds-top-deepfake-artist-wow-this-is-developing-more-rapidly-than-i-thought/
- Hwang, Y., Ryu, J.Y. et Jeong, S.-H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior, and Social Networking, 24(3), 188-193. https://doi.org/10.1089/cyber.2020.0174
- Jaiman, A. (2020, 27 août). Technical countermeasures to deepfakes. Towards Data Science. https://towardsdatascience.com/technical-countermeasures-to-deepfakes-564429a642d3
- Journalists for Human Rights. (2019, 27 septembre). Launching JHR’s program on "Fighting Disinformation through Strengthened Media and Citizen Preparedness in Canada." Newswire. https://www.newswire.ca/news-releases/launching-jhr-s-program-on-fighting-disinformation-through-strengthened-media-and-citizen-preparedness-in-canada--899686785.html
- Kahne, J., Hodgin, E. et Eidman-Aadahl, E. (2016). Redesigning civic education for the digital age: Participatory politics and the pursuit of democratic engagement. Theory & Research in Social Education, 44(1), 1-35. https://ypp.dmlcentral.net/sites/default/files/publications/Redesigning%20Civic%20Ed_Kahne%2C%20Hodgin%2C%20Eidmain-Aadahl.pdf
- Kearney, R. (2020). Postdigital visual literacy: A semiotic perspective. Auckland University of Technology.
- Kelly, G. (1955/1991). The psychology of personal constructs. W. W. Norton/Routledge.
- Knight, W. (2019, 19 août). The world’s top deepfake artist is wrestling with the monster he created. MIT Technology Review. https://www.technologyreview.com/s/614083/the-worlds-top-deepfake-artist-is-wrestling-with-the-monster-he-created/
- Lalonde, D. (2019, 11 septembre). The election’s on: Now Canadians should watch out for dumbfakes and deepfakes. The Conversation. https://theconversation.com/the-elections-on-now-canadians-should-watch-out-for-dumbfakes-and-deepfakes-122927
- Langlois, S., Proulx, S. et Sauvageau, F. (2020). La confiance envers les médias d’information et les médias sociaux au Québec. Centre d’études sur les médias. https://www.cem.ulaval.ca/wp-content/uploads/2020/02/cem-confiance-langlois-proulx-sauvageau.pdf
- Lazard, A. et Atkinson, L. (2015). Putting environmental infographics center stage: The role of visuals at the elaboration likelihood model’s critical point of persuasion. Science Communication, 37(1), 6-33.
- Le Figaro et Agence France-Presse. (2021, 17 juin). Facebook dit progresser dans la détection des images manipulées ou « deepfake ». Le Figaro. https://www.lefigaro.fr/secteur/high-tech/facebook-dit-progresser-dans-la-detection-des-images-manipulees-ou-deepfake-20210617
- Lee, Y., Huang, K.T., Blom, R., Schriner, R. et Ciccarelli, C.A. (2021). To believe or not to believe: Framing analysis of content and audience response of top 10 deepfake videos on YouTube. Cyberpsychology, Behavior, and Social Networking, 24(3), 153-158. https://doi.org/10.1089/cyber.2020.0176
- Les décrypteurs. (2021). Comment combattre la désinformation. Radio-Canada. https://ici.radio-canada.ca/info/decrypteurs/robot-conversationnel-combattre-desinformation/initiation-hypertrucages/#top
- Liv, N. et Greenbaum, D. (2020). Deep fake and memory malleability: False memories in the service of fake news. AJOB Neuroscience, 11(2), 96-104.
- LSE Commission on Truth, Trust and Technology. (2019). Tackling the information crisis: A policy framework for media system resilience. http://www.lse.ac.uk/media-and-communications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf
- Manke, K. (2019, 18 juin). Researchers use facial quirks to unmask ‘deepfakes.’ Berkeley News. https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/
- Maras, M.H. et Alexandrou, A. (2019). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof, 23(3), 255-262.
- Matern, F., Riess, C. et Stamminger, M. (2019). Exploiting visual artifacts to expose deepfakes and face manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). https://ieeexplore.ieee.org/abstract/document/8638330/metrics#metrics
- Messaris, P. (1997). Visual persuasion: The role of images in advertising. Sage.
- Ministère de l’Éducation et de l’Enseignement supérieur. (2019). Cadre de référence de la compétence numérique. http://www.education.gouv.qc.ca/fileadmin/site_web/documents/ministere/Cadre-reference-competence-num.pdf
- Mitchell, A., Simmons, K., Matsa, K.E. et Silver, L. (2018, 9 janvier). Young people much more likely than older to get news daily via social media. Pew Research Center. https://www.pewresearch.org/global/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/pg_2018-01-11_global-media-habits_3-02/
- Moukheiber, A. (2020). Votre cerveau vous joue des tours. J’ai Lu.
- Naffi, N. (2018). Learning about oneself: An essential process to confront social media propaganda against the resettlement of Syrian refugees [Thèse de doctorat, Université Concordia]. Spectrum. https://spectrum.library.concordia.ca/983399/
- Naffi, N. et Davidson, A.-L. (2016). Examining the integration and inclusion of Syrian refugees through the lens of personal construct psychology. Personal Construct Theory & Practice, 13, 200-209. https://tinyurl.com/5vr6wdyt
- Naffi, N. et Davidson, A.-L. (2017). Engaging host society youth in exploring how they construe the influence of social media on the resettlement of Syrian refugees. Personal Construct Theory & Practice, 14, 116-128. https://tinyurl.com/7zrapavz
- Naffi, N., Davidson, A.-L. et Berger, F. (2021, 20 avril). Infocalypse : la propagation des hypertrucages menace la société. The Conversation. https://theconversation.com/infocalypse-la-propagation-des-hypertrucages-menace-la-societe-158335
- News Literacy Project. (2021a). InfoZones. https://get.checkology.org/lesson/infozones/
- News Literacy Project. (2021b). What is Checkology? https://get.checkology.org/what-is-checkology/
- O’Brien, M. (2019, 12 juin). Hany Farid on the threat of deepfakes during the 2020 election. I School Berkeley. https://www.ischool.berkeley.edu/news/2019/hany-farid-threat-deepfakes-during-2020-election
- Office québécois de la langue française. (2019a). Hypertrucage. Dans Grand dictionnaire terminologique. http://gdt.oqlf.gouv.qc.ca/ficheOqlf.aspx?Id_Fiche=26552557
- Office québécois de la langue française. (2019b). Mésinformation. Dans Grand dictionnaire terminologique. http://gdt.oqlf.gouv.qc.ca/ficheOqlf.aspx?Id_Fiche=26556735
- Oxford Reference. (2021). Ocularcentrism. https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100245338
- Paris, B. et Donovan, J. (2019). Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society. https://datasociety.net/wp-content/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal-1-1.pdf
- Pérez-Escoda, A., Pedrero-Esteban, L.M., Rubio-Romero, J. et Jiménez-Narros, C. (2021). Fake news reaching young people on social networks: Distrust challenging media literacy. Publications, 9(2), 24. https://doi.org/10.3390/publications9020024
- Prensky, M. (2001). Digital natives, digital immigrants. Part I. On the Horizon, 9(5), 1-6.
- Prior, M. (2013). Visual political knowledge: A different road to competence? Journal of Politics, 76(1), 41-57.
- Purnell, C. (2020). Solutions deepfakes. Psychology Today, 30-31.
- Queen’s Printer for Ontario. (2020). Digital literacy. https://www.dcp.edu.gov.on.ca/en/program-planning/transferable-skills/digital-literacy
- Reuters. (2021). Identifier et lutter contre les contenus médiatiques manipulés. https://www.reuters.com/manipulatedmedia/fr/
- Schick, N. (2020). Deepfakes: The coming infocalypse. Grand Central Publishing.
- Schroepfer, M. (2019, 5 septembre). Creating a data set and a challenge for deepfakes [page Facebook]. https://ai.facebook.com/blog/deepfake-detection-challenge/
- Slate. (2021, 14 mars). Une mère utilise le deepfake pour discréditer des pom-pom girls rivales de sa fille. Slate. http://www.slate.fr/story/205547/une-mere-utilise-le-deepfake-pour-discrediter-des-pom-pom-girls-rivales-de-sa-fille
- Société internationale pour la technologie dans l’éducation. (2018, 11 octobre). Rethinking digital citizenship. YouTube. https://www.youtube.com/watch?v=iwKTYHBG5kk
- Sonnemaker, T. (2021, 13 avril). "Liar’s dividend": The more we learn about deepfakes, the more dangerous they become. Business Insider. https://www.businessinsider.com/deepfakes-liars-dividend-explained-future-misinformation-social-media-fake-news-2021-4
- Stanford History Education Group. (s. d.). Curriculum. Civic Online Reasoning. https://cor.stanford.edu/curriculum/
- Stanford History Education Group. (2016). Evaluating information: The cornerstone of civic online reasoning. https://sheg.stanford.edu/upload/V3LessonPlans/Executive%20Summary%2011.21.16.pdf
- Statista. (2021a, 5 mai). YouTube user reach among internet users in Canada as of April 2020, by age group. https://www.statista.com/statistics/484416/canada-youtube-penetration-by-age/
- Statista. (2021b, 19 avril). Number of TikTok users in the United States from 2019 to 2024. https://www.statista.com/statistics/1100836/number-of-us-tiktok-users/
- Statistique Canada. (2018, 7 février). A portrait of Canadian youth. https://www150.statcan.gc.ca/n1/pub/11-631-x/11-631-x2018001-eng.htm
- Stenberg, S. (2006). Conceptual and perceptual factors in the picture superiority effect. European Journal of Cognitive Psychology, 18(6), 813-847.
- Strupp, C. (2019, 30 août). Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case. The Wall Street Journal. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
- Sundar, S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. Dans M. Metzger et A. Flanagin (dir.), Digital media, youth, and credibility (p. 73-100). MIT Press.
- TAP Staff Blogger. (2019, 12 septembre). Danielle Citron discusses how deepfakes undermine truth and threaten democracy. Technology, Academics, Policy (TAP). https://www.techpolicy.com/Blog/September-2019/Danielle-Citron-Discusses-How-Deepfakes-Undermine.aspx
- The Dalí Museum. (2019, 8 mai). Behind the scenes: Dalí Lives [vidéo]. YouTube. https://www.youtube.com/watch?v=BIDaxl4xqJ4
- Thom, J. (2016). Believing the news: Exploring how young Canadians make decisions about their news consumption [Thèse non publiée, University of Western Ontario]. Electronic Thesis and Dissertation Repository. http://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=5898&context=etd
- Turan, S.G. (2021). Deepfake and digital citizenship: A long-term protection method for children and youth. Dans R.J. Blankenship (dir.), Deep fakes, fake news, and misinformation in online teaching and learning technologies (p. 124-142). IGI Global. https://doi.org/10.4018/978-1-7998-6474-5.ch006
- Vaccari, C. et Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 1-13.
- Valtonen, T., Tedre, M., Mäkitalo, K. et Vartiainen, H. (2019). Media literacy education in the age of machine learning. Journal of Media Literacy Education, 11(2), 20-36.
- Vicario, M.D., Quattrociocchi, W., Scala, A. et Zollo, F. (2019). Polarization and fake news: Early warning of potential misinformation targets. ACM Journals, 13(2), 1-22.
- Villasenor, J. (2019, 14 février). Artificial intelligence, deepfakes, and the uncertain future of truth. Brookings. https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/
- Wahl-Jorgensen, K. et Carlson, M. (2021). Conjecturing fearful futures: Journalistic discourses on deepfakes. Journalism Practice, 15(6), 803-820. https://www.tandfonline.com/doi/full/10.1080/17512786.2021.1908838
- Weghe, T.V. (2019, 29 mai). Six lessons from my deepfakes research at Stanford. Medium. https://medium.com/jsk-class-of-2019/six-lessons-from-my-deepfake-research-at-stanford-1666594a8e50
- Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39-52.
- Witten, I.B. et Knudsen, E.I. (2005). Why seeing is believing: Merging auditory and visual worlds. Neuron, 48(3), 489-496.
- ZDNet. (2021, 18 juin). Vidéo : dans la lutte contre les deepfakes, Facebook a une solution à base d’IA. ZDNet. https://www.zdnet.fr/actualites/video-lutte-contre-les-deepfake-facebook-a-une-solution-a-base-d-ia-39924757.htm