Article

Algorithm Literacy as a Subset of Media and Information Literacy: Competences and Design Considerations

by
Divina Frau-Meigs
Department of Humanities, Sorbonne Nouvelle University, Unesco Chair Savoir Devenir, 17 rue de la Sorbonne, 75005 Paris, France
Digital 2024, 4(2), 512-528; https://doi.org/10.3390/digital4020026
Submission received: 13 March 2024 / Revised: 28 May 2024 / Accepted: 4 June 2024 / Published: 6 June 2024
(This article belongs to the Special Issue Digital in 2024)

Abstract

Algorithms, indispensable to understand Artificial Intelligence (AI), are omnipresent in social media, but users’ understanding of these computational processes and the way they impact their consumption of information is often limited. There is a need for Media and Information Literacy (MIL) research investigating (a) how MIL can support algorithm literacy (AL) as a subset of competences and with what working definition, (b) what competences users need in order to evaluate algorithms critically and interact with them effectively, and (c) how to design learner-centred interventions that foster increased user understanding of algorithms and better response to disinformation spread by such processes. Based on Crossover project research, this paper looks at four scenarios used by journalists, developers and MIL experts that mirror users’ daily interactions with social media. The results suggest several steps towards integrating AL within MIL goals, while providing a concrete definition of algorithm literacy that is experience-based. The competences and design considerations are organised in a conceptual framework thematically derived from the experimentation. This contribution can support AI developers and MIL educators in their co-design of algorithm-literacy interventions and guide future research on AL as part of a set of nested AI literacies within MIL.

1. Introduction

Algorithms, finite sequences of instructions applied to data flows, tend to organise content in order to rank (Google PageRank); to recommend (Facebook Newsfeed, Twitter feeds); to predict (Google auto-complete); and, increasingly, to generate information (via generative AI systems built on large language models, such as the chatbot ChatGPT-4, or image generators such as DALL-E). Providers and platforms implement them to tailor content (news, search, advertising, etc.) to their users’ habits, based on their individual and aggregated behavioural data. In doing so, they maximise traffic via engagement and generate revenue and benefits [1]. Users thus increasingly perceive the world, online and offline, via decisions made by algorithms, be it in their purchase of products, their search for friends and romance, or their consumption of news. But they are not fully aware that such decisions may inhibit their own agency [2,3].
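To make these three functions concrete, here is a deliberately simplified Python sketch of an engagement-driven ranking rule; all names, weights and numbers are invented for illustration and do not reproduce any platform’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int        # aggregated behavioural signal
    shares: int        # engagement signal, weighted more heavily below
    affinity: float    # 0..1 similarity to this user's past behaviour

def engagement_score(post: Post) -> float:
    # Hypothetical scoring rule: raw engagement boosted by personal affinity.
    return (post.clicks + 3 * post.shares) * (0.5 + post.affinity)

feed = [
    Post("Measured policy explainer", clicks=120, shares=4, affinity=0.2),
    Post("Outrage-bait rumour", clicks=90, shares=60, affinity=0.9),
]

# The post that maximises engagement is ranked first,
# regardless of its information quality.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```

Under this invented rule, the rumour scores 378.0 and the explainer 92.4: engagement maximisation, not information quality, orders the feed.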
This structural knowledge gap points to the need for practical solutions: taking up the challenge of algorithm opacity by looking at the end-user (not the producer) and empowering citizens to analyse algorithms critically and creatively, in the hope of bringing insights into their own information consumption [4]. It calls for a bottom-up approach, with more Media and Information Literacy (MIL) strategies adjusted to algorithmic savviness, to make citizens more competent in their uses of online social media and their capacity to fight disinformation [5]. Scaling up citizens’ agency is key to democratic societies’ capacity to harness the benefits of algorithms, useful as they may be, and to curb their negative effects on information quality, disinformation spread and platform transparency.
Shifting the focus to the end-user, the Crossover project [6], supported by EU funding, brought together developers, fact-checkers, journalists and experts in MIL (crossover.social). It aimed at creating a dialogue between these different actors, from the design of the fact-checking tool to the larger use of it by other professionals such as teachers and educators. Accordingly, the focus on MIL was conducted with an empirical research design to determine scenarios of use by journalists and fact-checkers from which to derive competences and design orientations for educators (and journalists who intervene in schools). The research investigated (a) how MIL can support algorithm literacy (AL) as a subset of competences, and with what working definition; (b) what competences users need in order to evaluate algorithms critically and interact with them effectively; and (c) how to design learner-centred interventions that foster increased user understanding of algorithms and better response to disinformation spread by such processes.

Algorithm Literacy: A Dimension of MIL Still in Its Infancy

When considering if and how AL can fit within a MIL framework, existing definitions need to be assessed for how they deal with the transdisciplinary dimension of algorithms, beyond maths and computing, towards social sciences, to understand how they affect and impact users’ decision-making processes and overall agency [7]. This perspective places the emphasis on societal issues, such as the attention economy, information quality, information disorders and biases, and eventually the ethics of it all.
Some definitions come from the field of computing and data management. Computation studies emphasise the user’s ability “to apply strategies that allow them to modify predefined settings in algorithmically curated environments, such as in their social media newsfeeds or search engines, to change algorithms’ outputs, compare the results of different algorithmic decisions, and protect their privacy” [8]. This definition is close to critical data literacy [9] and is more akin to privacy and consumer protection [10,11]. The definitions that emerge from the Artificial Intelligence field are derivative. There, AL consists of the ability “to organise and apply algorithmic curation, control and active practices relevant when managing one’s AI environment” [12]. This definition is attached to management and control.
Other definitions are closer to the field of Media and Information Literacy. One trend emanates from information literacy and library sciences. “Algorithmic literacy—a subset of information literacy, is a critical awareness of what algorithms are, how they interact with human behavioural data in information systems, and an understanding of the social and ethical issues related to their use” [13]. Another definition, closer to media studies, considers algo-literacy as “the combination of users’ awareness, knowledge, imaginaries, and tactics around algorithms” [14]. This definition is the most user-centric and refers to the experiences of users with algorithms, including the representations in the users’ minds [15].
These definitions tend to stem from research on young people and their competences in the face of algorithms. Interviews of young people who played a game prototype designed by the Canadian MediaSmarts’ education team showed that “while youth understand and appreciate the benefits of recommendation algorithms, they are troubled by algorithmic data collection and data sharing practices” [16]. Another study, conducted in the Netherlands, interviewed young people and showed that they were unaware of the curation and personalisation operated by algorithms on their social media uses or—if aware—did not know what to do about it [14].
Among other researchers, Dogruel, Masur and Joeckel [8] have conducted tests on a competence-based approach, trying to answer the issue of verbalising and evaluating AL. They opted for two cognitive dimensions of algorithm literacy: awareness of algorithm uses and knowledge about algorithms. They found that “the two scales correlated positively with participants’ subjective coding skills and proved to be an appropriate predictor for participants’ handling of algorithmic curation in three test-scenarios”.
Very little research focuses on teachers and their perception of AL. It confirms their urgent need for training and points to major gaps among the teaching body. Educators are very reluctant to include sessions on AL in their courses because they lack knowledge and confidence on the topic, because it is not present in curricular design, and because they lack teaching guidelines and support from their hierarchy [17,18]. Researchers call for more algorithmic-literacy tools and resources to help youth acquire the knowledge they need to protect themselves and their information in digital spaces. Some point to three methodological challenges for algorithm-literacy research: “first, the lack of an established baseline about how algorithms operate; second, the opacity of algorithms within everyday media use; and third, limitations in technological vocabularies that hinder young people in articulating their algorithmic encounters” [14].
The examination of the research on definitions and their implementation points to the fact that research and education on AL are still in their infancy, with knowledge gaps [19] and without a consistent set of competences dealing with skills, knowledge, attitudes and values. It also confirms that AL can be part of MIL, to inform users in their non-technical daily interactions with social media as they affect information consumption and circulation. To address the lack of curricular design, MIL can use the familiarity principle, with tried-and-true methods that make it less daunting for educators and learners to tackle algorithms, since using MIL strategies does not require as huge an effort in training and upskilling as starting from scratch or from STEM—where AL is sometimes part of computing and mathematics [20].
Accordingly, the working definition adopted for the Crossover project was derived from the key MIL elements of the review of literature. It posited that AL combined (1) the users’ awareness and knowledge of representations and tactics around algorithms with (2) the users’ explicit and implicit actions to curate content with algorithms and adjust their browsing behaviour and ethics. This two-tiered user-centric definition encompassed algorithmic functions and the cultural practices and imaginaries around them. Tying it to actual real-life scenarios of use was crucial to identify sense-making practices that incorporated key algorithmic concepts such as ranking, recommending and predicting, as well as issues of filtering, curating and attention engineering. The focus was on MIL practitioners (educators and journalists) as being the most likely to apply AL in their interventions and the neediest in terms of competence frameworks and design guidance.

2. Methodology

To bring together users’ algorithmic awareness and users’ curating behaviour, the MIL theory used was transliteracy, as it considers the multi-level understanding of information as news, documents and data [21]. Transliteracy theory takes into account “(1) the multi-media dimensions of current literacy—being able to read, write, count and compute with print and digital tools and via all sorts of formats from book to blog; (2) the trans-domain requirements for digitally sustainable literacy—being able to code and to search, test, validate, modify information as understood in computation (data), in communication (news) and in library science (documents)” [15,21,22].

2.1. Crossover: An Innovative Research–Action Design

Based on transliteracy theory, the Crossover project looked at both the multi-media dimension (across social media channels, mass media and print) and the trans-domain dimension (from data to news to documents). It was deployed following three major steps: real-life inquiries conducted by two fact-checking entities, Checkfirst and Disinfolab.eu; news stories derived from the inquiries and published by one newspaper, Apache.be; and podcasts debriefing the stories and elaborating on the required MIL competences with all experts, so as to produce a MIL “Algo-literacy prebunking kit” at the end of the project.
It involved the appropriation, by developers, fact-checkers, journalists and MIL experts, of a smart innovative tool, the Dashboard, and its attendant “user-meters”: an autonomous system of mini-computers placed in the homes of nine users (dispersed across Belgium) to simulate their online behaviour.
The Dashboard was designed to understand the functioning of algorithms from a user perspective. Hence, the double approach conducted by Checkfirst: using the APIs made available by the platforms to query and monitor different search engines and social media (Google, YouTube, Facebook, Twitter, Odysee, Reddit and Mastodon), and placing an autonomous robot that made requests on the platforms from the homes of various users (see Figure 1). This double approach made it possible to remain independent of the platforms for data provision and to compare the data the platforms officially provided with the concrete experience of the users. The influence of algorithms on users’ consumption of news was thus made apparent, especially as specific topics were investigated by journalists trained to use the Dashboard.
The Crossover Dashboard tool (1) afforded measures in real time about the influence of algorithms on social media and search engines, (2) detected potential disinformation campaigns, (3) was used for online and field investigations, and (4) took into account users with a unique system of at-home monitoring (see Figure 2).
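The comparison logic at the heart of this double approach can be sketched minimally as follows; the function, names and data are illustrative assumptions, not the Dashboard’s actual code:

```python
def overlap(api_results: list[str], meter_results: list[str]) -> float:
    """Share of API-reported results that the user-meter actually saw."""
    seen = set(meter_results)
    return sum(r in seen for r in api_results) / len(api_results)

# Hypothetical top results for the same query, from the two sources:
# what the platform API reports vs. what the at-home robot observed.
api_top = ["r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10"]
home_top = ["r1", "r3", "x1", "r4", "x2", "r2", "x3", "r5", "x4", "x5"]

print(f"API/user-meter overlap: {overlap(api_top, home_top):.0%}")
# A low overlap flags curation or personalisation that the
# official API data alone would not reveal.
```

A persistent gap between the two sources, across queries and users, is what made the influence of curation and personalisation observable.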

2.2. Data Collection

For the data collection, the method adopted was by scenario of use [23]. Scenarios of use are especially efficient in an empirical and experimental approach because they enable descriptions of users’ interactions with a system (here the online platforms) while achieving a goal (here algorithmic trends in news) under specified conditions (the dashboard and its constraints and affordances). They thus provide insights into their process and yield information about the context in which the system operates, from the perspective of the user, in a task-oriented way as close to their practice as possible [24].
The four scenarios of use finally chosen followed a strict timeline: (1) they were inferred from discussions of MIL experts with journalists and fact-checkers and were adopted for their capacity to also respond to the experiences of other professionals (outside the field of journalism) like teachers and educators; (2) they were verified by four inquests and attendant stories that depended on the news at the time of the project and the disinformation it elicited (2022, the beginning of the war in Ukraine); (3) they were “debriefed” in four podcasts that involved all actors (developers, fact-checkers, journalists and MIL experts), with reference to the newspaper story actually published by Apache.be; (4) all four scenarios of use were gathered in a comprehensive MIL tool, the MIL “Algo-literacy prebunking kit”, built on this empirical approach.
This method made it possible to mimic the two levels of interaction with algorithms: what the platforms provide the users and the concrete experience of the users. It also allowed all participants in the project to gather evidence of algorithmic activity (almost as forensics) and then use it to conduct real-life investigations. The fact-checking and journalistic activity was then processed to fit a needs-based approach for teachers and educators, as well as journalists, who are increasingly asked to intervene in classrooms or to train their colleagues in MIL.
The podcast editorial line for the four inquests followed a similar pattern, to convey a sense of the digital “factory” behind the evidence-based actions of algorithms on the access, search and curation of information and, significantly, disinformation. The editorial line was organised along six recognisable steps repeated across the four podcasts:
(1) The initial inquest: the online “trending” information of the moment;
(2) The Apache.be investigation and news story;
(3) A zoom on a search scenario of use with the Dashboard;
(4) An explanation of a key algorithmic notion;
(5) The resolution: the disinformation risk avoided/dealt with;
(6) The MIL solutions and competences called for.
The podcasts reflected a mix of competences and experiences, to mirror the activities of the developers, jointly with the journalists and MIL experts, as they explored the scenarios of use. They tapped directly into the participants’ everyday experiences with algorithms, building on their shock and surprise at faulty results and predictions. The podcast format was chosen to convey that direct and authentic feeling to the audience (as the podcasts can be listened to autonomously, without the whole MIL kit). They reproduced the thought processes of the various actors involved and gave them a voice. They were a way of eliciting conversations and insights on how algorithms work behind the scenes, without accepting the “black box” metaphor [10,25,26]. The themes that emerged (war, disinformation, etc.) were likely to act at two levels: arousing curiosity (or shock) about algorithms and their hidden role, and motivating users to change behaviour and take action. From these authentic experiences, awareness of mechanisms at work, and competences, interactive quizzes were derived to build a knowledge base, and modular pedagogical pathways were suggested for MIL educators.

2.3. MIL Principles

This method was construed in accordance with MIL pedagogical design principles [27,28]. The general objective was to facilitate the appropriation of a holistic pedagogical strategy by practitioners (be they educators or journalists intervening in classrooms). This implied adopting a number of MIL tenets:
  • A modular approach (stories, podcasts, quizzes) to allow for a variety of entries for practitioners and educators;
  • Authentic documents and examples to remain as close as possible to users’ experiences and societal issues;
  • A competence-based framework with verbalised notions and actions to stimulate critical thinking and foster civic actions;
  • A multi-stakeholder strategy that shows the perspectives of the different actors involved in understanding algorithms and in the co-design of MIL interventions (developers, journalists, experts).
These principles then guided the elaboration of the final “Algo-literacy prebunking kit”. The toolkit thus addressed the three main points of the Crossover project, regarding its MIL dimension: (a) showing practitioners how MIL can support algorithm literacy (AL), (b) spelling out the competences users need in order to evaluate algorithms critically and interact with them effectively, and (c) providing an accompanying document (with a series of quizzes based on the podcasts) for practitioners to design learner-centred interventions in the shape of modular workshops [29].

3. Results

3.1. The MIL Algorithm-Literacy Matrix of Scenarios of Use

Overall, the scenarios of use, as reflected in the podcasts, mimicked four major information search strategies: by notional keywords, by communities of affinity, by influencer accounts, and by tool affordances, since the Dashboard became more and more agile as its database increased (see Table 1: “matrix of scenarios of use”). This implied being able to navigate across social media, mass media and print channels and to validate and modify information across domains (from data to news to documents), as suggested by transliteracy theory. It was thus possible to develop a trajectory for users, from online source to data traces to evidence-building in real-life circumstances.
In the process, the scenarios of use provided insights on three major roles of algorithms (ranking, recommending and predicting) in a task-oriented way. This did not so much increase the transparency of algorithms as transparency in their uses, eliciting the notion that, if algorithms cannot be modified, users can nonetheless modify the way they interact or “ride” with them and their results. The initial focus on information (rather than disinformation) was equally rewarding, as it became apparent that the point was not to stop algorithms but to stop the amplification of disinformation, thus raising ethical issues among the users.
The revelations of the inquests showed the actual workings of the algorithms at the users’ end (not the API end of the platforms) and, as a consequence, the competences mobilised and the societal issues addressed. The first two podcasts laid the stress on the investigative and search dimensions of the strategies, while the last two also added a reflexive dimension, as journalists, fact-checkers, developers and MIL experts objectified their practices.
  • Scenario 1, “the keyboard fighters”, showed the mismatch between online calls for action and real-life mobilisations, as the “liberty convoy” calls, which seemed threatening online, turned out to be insubstantial in real life. The role of algorithmic ranking was thus debunked in relation to user search. The MIL lesson drawn was that online disinformation did not always work and could be disproved by facts (see podcast 1).
  • Scenario 2, “algorithms and propaganda: dangerous liaisons”, revealed how algorithms tended to promote state propaganda: as Russia Today was banned by the European decision (due to the war in Ukraine), algorithms recommended a new state-controlled media, CGTN, the state channel of the Chinese Communist Party, that relayed Russian propaganda. The role of algorithmic recommendation was thus exposed in relation to user engagement. The MIL lesson drawn was that disinformation was amplified along polarised lines and across borders (see podcast 2).
  • Scenario 3, “how algorithms changed my life”, unveiled how conspiracy theories circulated on influential accounts, in “censorship-free” and unmoderated networks like Odysee. It followed an influencer, the extreme-right political personality Dries Van Langenhove, who promoted racism, violence and anti-COVID stances. The role of algorithmic recommendation was thus unveiled in relation to user echo chambers. The MIL lesson drawn was that information diversity was key to avoid being caught in the rabbit holes of the attention economy (see podcast 3).
  • Scenario 4, “the algorithm watchers”, demonstrated how Google auto-complete systematically offered users the Donbass Insider recommendation when they typed Donbass in their search bar, across all users’ meters (see the sketch after this list). Donbass Insider relayed Russian false messages about the war in Ukraine and was linked to Christelle Néant, a Franco-Russian pro-Kremlin blogger and self-styled journalist. The role of algorithmic prediction was revealed in relation to user interactions with the tool affordances. The MIL lesson drawn was that queries and prompts can lead to automated bias and human manipulation (see podcast 4).
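As a sketch of the convergence check referenced in scenario 4, the following hypothetical reconstruction flags any auto-complete suggestion served to every user-meter; the logged suggestions are invented for illustration, and the actual Dashboard pipeline is not reproduced here:

```python
from collections import Counter

# Hypothetical logs: auto-complete suggestions captured by each
# at-home user-meter for the same query ("Donbass").
meter_logs = {
    "meter_1": ["donbass insider", "donbass war", "donbass map"],
    "meter_2": ["donbass insider", "donbass map", "donbass news"],
    "meter_3": ["donbass insider", "donbass war", "donbass history"],
}

counts = Counter(s for suggestions in meter_logs.values() for s in suggestions)

# A suggestion served to every meter, whatever the user profile,
# points to a systematic prediction rather than to personalisation.
for suggestion, n in counts.most_common():
    if n == len(meter_logs):
        print(f"served to all meters: {suggestion!r}")
```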
The four scenarios were summarized in Table 1, providing the “matrix of scenarios of use”, and served as the main organization of the prebunking kit.
The scenarios-of-use method proved efficient in describing user interactions with the various social media and online platforms and in unveiling the role of algorithms in their interplay with information and user engagement. The scenarios provided insights on the workings of such systems, yielding some surprises and undermining some “faulty” early hypotheses and predictions, as the developers, fact-checkers and journalists followed through with their real-life inquests. This task-oriented perspective, close to their everyday practice, was further elicited in the conversations held in the podcasts with the MIL experts, especially the last two, which focused on how algorithms changed their working strategies.
The scenarios of use also indicated a shift in the modes of conducting information search, particularly in relation to sources and evidence-building. Users are no longer dealing with secret or opaque sources but with contingent, voluminous amounts of data that require interpretation, with the help of specific tools and with an awareness of how algorithms work. This shift was made visible by the journalists involved in project Crossover, who likened it to a form of “forensics” that required a different way of conceptualising inquiry (podcast 4). They saw a positive use of algorithms as an “early signal” of phenomena that might develop and are worth monitoring and pursuing (podcast 3). They described a kind of algo-journalism, focused on demand, riding the algorithms with a two-step process: online trend detection followed by selection of topics worth delving into. This algo-journalism “includes sorting information, reviewing it, presenting it visually or even using robots to write articles … And we almost systematically use algorithms with artificial intelligence to process all that” (podcast 3).
More broadly, the scenarios of use also made visible the engineering of attention via algorithms. The topics chosen for inquiry, pushed by algorithms, revealed how much this attention is based on emotions, especially fear, that generate traffic, even if that traffic is based on propaganda, bias or manipulation (podcast 2). The intricate patterns between engagement and recommendation are particularly telling about how participation, presented as a positive attitude online, can be weaponised to bend offline attitudes (podcast 2), though not always meeting with success (podcast 1).
Finally, the scenarios of use also pointed to the possibility of new mediations: journalists, developers and MIL experts came together in engaging, collaborative work. The Dashboard was improved through an agile method as the various inquests led to new strategies, akin to prebunking, befitting their fact-checking mission (podcasts 3 and 4). The Dashboard introduced a tooled mediation as well, one that could offer a counterbalance to the algorithmic mediation as captured by the major online platforms (Google and Meta in particular).

3.2. MIL Algo-Literacy Meta-Competence Framework

The scenarios of use enabled the MIL experts to derive a number of valuable “lessons learnt”. They made it possible to understand how the actions online (such as queries and prompts) were algorithmically conditioned to shape access to information and individualisation of results and outcomes (as verified by the user-meters vs. the APIs analysis). They made it possible to combine awareness of processes and knowledge about functions, in particular ranking, recommending and predicting. They could thus derive the competences required for users to deal with algorithms in their daily practices.
More importantly, some meta-competences appeared together with specific micro-competences. They pointed to strategies and solutions at the individual and collective levels. Considering developments in journalism (media) together with the description of platform algorithmic applications (data) and the results yielded (documents) confirmed the usefulness of transliteracy theory for embedding algorithm literacy in Media and Information Literacy (see last section in podcasts 1, 2, 3 and 4).
For media, the meta-competence was related to understanding the context of production and distribution of algorithms and their cultural and societal implications. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
  • Know the new context of news production and amplification via algorithms;
  • Pay attention to emotions and how they are stirred by sensationalist contents and take a step back from “hot” news;
  • Be suspicious and aware of “weak signals” for disinformation (lack of traffic on some accounts, except for some divisive topics; very little activity among and across followers on a so-called popular website or community, etc.);
  • Fight confirmation biases and other cognitive biases.
For documents, the meta-competence was related to the mastery of information search and platform navigation, in particular the controlled and diversified use of sources as pushed by algorithms. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
  • Vary sources of information;
  • Be vigilant about divisive issues where opinions prevail and facts and sources are not presented;
  • Modify social media uses to avoid filter bubbles and (unsolicited) echo chambers;
  • Set limits to tracking so as to reduce targeting (as fewer data are collected from your devices);
  • Deactivate some functionalities regularly and set the parameters of your accounts;
  • Browse anonymously (use VPNs).
For data, the meta-competence was related to the control or oversight of algorithmic patterns, in particular for the sake of transparency and accountability. The ensuing micro-competences were distributed along areas related to knowledge, skills, attitudes and values:
  • Decipher algorithms, their biases and platform responsibility;
  • “Ride” algorithms for specific purposes;
  • Pay attention to the GDPR (RGPD) and platform compliance with data protection;
  • Mobilise for more transparency and accountability about their impact;
  • Require social networks to delete fake news accounts, to ban toxic personalities and to moderate content;
  • Encourage the creation of information verification sites and use them;
  • Use technical fact-checking tools like the Dashboard or InVID-Weverify;
  • Signal or report to platforms or web managers if misuses are detected;
  • Comment and/or rectify “fake news”, whenever possible;
  • Alert fact-checkers, journalists or the community of affinity.
The MIL experts deemed it important to emphasise user agency and reactivity by adding explicit and implicit actions to curate algorithms and adjust browsing behaviour, as evidenced in the Crossover project. They were intent on elucidating the mechanics of algorithms and the processes at stake, so as to prevent algorithmic risks and empower users to ride algorithms for their own information consumption.
The results made it possible to encapsulate the major dimensions for building an algorithm-literacy meta-competence framework aligned with the existing MIL competence framework, which commingles knowledge, skills, attitudes and values (Figure 3). The AL framework underlines the attention paid to context, content and user critical thinking, as well as the inter-related roles of media, data and documents, to ensure oversight of algorithmic patterns and mastery over search and sources. The framework also encourages users to exert their agency in dealing with algorithms, especially by communicating with others and mobilising for augmented transparency and accountability of platforms.

3.3. The Knowledge Base with Pedagogical Pathways and Design Considerations

The meta-competence domains and attendant micro-competences were picked up in the interactive quizzes and their accompanying documents. The four interactive quizzes offered many options like “drag and drop”, “fill in the blanks”, etc. They could be played standalone (by youth and adults) or associated with the podcasts (see Table 2 and Table 3).
Quiz 1 was derived from the scenario of use 1 and podcast 1.
Apart from understanding how algorithms work, understanding the economic and geopolitical models behind them, and using your critical thinking skills wisely (without becoming paranoid), you can build some strategies to control your information better. Here is a list of reasonable goals if you want to reduce the influence of algorithms on your information. It’s up to you to find the solution that goes with it!
Quiz 2 was derived from the scenario of use 2 and podcast 2.
The four pedagogical pathways showed educators how to use the quizzes in the classroom (via workshop facilitation), while reinforcing their knowledge base (via the provided responses to the quizzes). Rather than announcing a completed journey, they sought to suggest educational guidelines inferred from the research conducted, playing on the educators’ familiarity principle: though algorithms might be a new topic, teachers could fall back on educational strategies grounded in well-honed MIL practices. The pathways suggested activities and workshops for interactions with young people, including how to use the Dashboard (pedagogical document 4). The full “Algo-literacy prebunking kit” [29] also summarised the whole experiment with a poster, downloadable for educators and the general public (see Figure 4), to be used in all kinds of workshops, entitled “Algo-literacy for all in 10 key points” (https://savoirdevenir.net/crossover/, accessed on 1 May 2024).
The full “Algo-literacy prebunking kit” was put together according to MIL design principles, in particular modularity, authenticity of documents, a competence-based framework, and a tool embedded in a larger context (not used per se) in order to understand information and disinformation [27]. The accompanying documents, with teaching guidelines, were meant to entice educators into engaging with MIL, so that they could overcome their lack of knowledge and confidence on the topic [17]. The prebunking notion [30,31] seemed fit to be introduced at the end of the process, in terms of helping users anticipate the role of algorithms through preparation and education as the best filter against disinformation. The point was to create new heuristics and a kind of educational preparedness that could be pedagogically sustainable, especially if taken up in a larger MIL design encompassing the societal and cultural context [32].

4. Discussion

Implementing scenarios of use was an effective method for addressing the main goals of project Crossover in terms of Media and Information Literacy. It made it possible to unravel some of the workings of algorithms, to clarify the interconnections between AL and MIL, and to test the working definition. It allowed the construction of a competence framework based on the felt experience of users and provided modular elements to design MIL interventions derived from experimentation.

4.1. By-Passing the “Black Box” of Algorithms

The four inquests yielded insights on algorithms that went beyond the “black box” metaphor [19,33], providing an “understanding of opacity” [34]. The analysis of traces left during the search process made it possible to infer a number of actions by algorithms that confirmed the initial hypotheses made by the developers and journalists. These authentic inquests contributed to the empowerment of users by revealing and providing “evidence” of the action of algorithms, in the double sense of making visible and of providing proof [35]. This also emphasised the multi-stakeholder benefits of editorial collaboration and technical experimentation with a tool that was co-designed by all actors.
Using the experiences of developers and journalists has proved useful, as their everyday encounters with algorithms elicited unexpected outputs around real-life cases and enquiries [15]. Creating surprise or shock with faulty results and predictions can invite conversations and reflections on how algorithms work behind the scenes. Creating situations that elicit exchanges and co-learning can be achieved around themes that are likely to bring out the hidden role of algorithms, arousing curiosity or concern and motivating users to change behaviour and take measures to counteract them. This approach goes beyond “coping” [8], to encourage users to be critically aware of their online surroundings and to be active in their responses against disinformation as conveyed by algorithms.
In terms of transliteracy theory, the added insight was that what happens inside the system’s “black box” does not determine the whole of the process, especially when it comes to information search and curation [25]. Users do not need to know the full architecture of algorithms as developed by the platforms to understand their mechanics, especially in terms of outputs and services in everyday life. However, users do need to know how some basic functions (ranking, recommending, predicting) can affect their actions and the consequences algorithms might have in real life, for their civic engagement and consumption of news for instance. This is where interacting with the affordances of a tool like the Dashboard can provide some computational thinking that makes sense of the technology, its affordances and its dependencies.

4.2. Confirming the Definition of Algorithm Literacy

The initial two-tiered user-centric definition, encompassing algorithmic functions and the cultural practices and imaginaries around them, was confirmed throughout the experimentation. The distinction between awareness and knowledge [8] established in the first tier of the definition was validated by confronting representations with tactical real-life results. The actions of curation and engagement established in the second tier were also validated by a pragmatic, task-driven posture, in a logic of prevention and prebunking [30].
This definition proposed a functional and operational algorithm literacy for societal and cultural engagement [7]. As such, it incorporated the complex imbrication of transliteracy [21,22]: it fostered critical thinking about the level of multi-media services provided by algorithms (their functionalities, their finalities) and about the level of trans-domain reach of algorithms (their impact on information access, search and curation).
This definition also addressed the old MIL conundrum about “the technicist trap” [36], eschewing the tool dependency bias that comes from using media and other smart devices like the Dashboard. The trap was avoided by incorporating design principles from MIL and information and communications sciences, like verification processes or disinformation detection [17,37], with authentic cases where the tool is embedded in a larger contextual setting. The engineering dimension of algorithms was made explicit and explainable, the Dashboard becoming a kind of pedagogical tool that enabled demonstrations of how to obtain specific results from the codes and data [38]. Understanding the mechanics of algorithms makes it possible to embrace all the stakes of information circulation and consumption, not just the disinformation risks and biases. The four scenarios of use evinced the need for a modicum of technical skills, as journalists and developers had to revisit their professional practices, including their experiences of algorithms, towards “algo-journalism” (podcasts 3 and 4).

4.3. Fine-Tuning the Competence Framework with MIL Design

Developing algorithmic literacy implied meshing tactical experiences and active competences to deal with curated online environments. The competence framework explicitly articulated the transliteracy domains for algorithm literacy. It mixed cultural and communication competences (media literacy), organisational and search competences (information literacy), and operational and problem-solving competences (data literacy). The final framework showed the overlaps with Media and Information Literacy while retaining the specificities of algorithm literacy (see Table 2). This competence-based approach indicated how algorithmic literacy could be integrated within a MIL design and curriculum, rather than seen as a separate literacy, following the familiarity principle that could foster adoption and implementation among teachers and educators [20].
Becoming savvy in algorithm literacy can encompass different types of behaviours: detecting disinformation, verifying content, disclosing/exposing the results of search, responding/refuting the outcomes, and asking for transparency and accountability from platforms and services. It implies supporting a certain number of values, traditionally fostered by MIL, such as information integrity, quality data, freedom of expression, and media pluralism and diversity. It also suggests solutions that are personal (building resilience) and collective (building resistance). Beyond coping, users need to engage actively with their online surroundings, in particular to address platform developers and policy makers [1,39].
The development of algorithmic literacy thus enlarges the users’ understandings of the digital culture in which they are immersed, as a technical culture and a culture based on the economy of attention—a misnomer for systemic inattention. Such literacy needs to be part of basic literacy curricula for citizens’ empowerment, as they have to acquire the individual capacities and collective processes to resist algorithmic logics and mechanics when these pose a threat to information and weaponize disinformation. Confronting issues such as “transparency” and “accountability” cannot be achieved without a critical citizenship force [40,41].

5. Conclusions

The research did confirm the initial assumptions of the project. It showed that MIL can support algorithm literacy for general users, including teachers and journalists focused on the fight against disinformation and in favour of quality information and search. It evinced a number of meta-competences that call attention to specific knowledge, attitudes, skills and values, especially when dealing with the three main types of algorithms under study, namely ranking, recommendation and prediction. It also showed how MIL design principles could build on the familiarity principle for teachers and educators to insert algorithm literacy into their curriculum and their practices.
The research also provided insights on how algorithm literacy, embedded in MIL, could open new perspectives on user agency. The four scenarios of use showed the workings of algorithms while by-passing the “black box” conundrum, to expose the processes that lead to specific search results in real life. Such scenarios could be extended to fields other than media that are structuring our relations to reality via digital platforms, such as the algorithms of tax administration, of dating services, of streaming movie recommendations, etc.
However, scenarios of use, empirically useful as they have proven to be in relatively untrodden territory, are not a panacea for ensuring full user agency. They need to be further tested in classrooms and added to other MIL strategies and initiatives, as the issue of disinformation calls for raising the competences of a number of societal actors, like journalists, developers, fact-checkers, MIL experts and researchers, in order to forestall the platforms’ interest in maintaining algorithm opacity over their commercial services.
This connects algorithm literacy to crucial democratic goals, as the stakes ultimately are to preserve citizens’ rights to access information and make decisions away from bias, disinformation and manipulation. As Generative Artificial Intelligence (GAI) systems gain in currency and spread to all sectors of society, the strategy used in the Crossover project remains pertinent: GAI can be added to AL within a MIL approach [20], playing on the familiarity principle to help all sorts of professionals maintain oversight of users’ daily activities, with clear insights on how those systems impact their interactions with information.

Funding

This project has received funding from the European Union’s programme on the financing of Pilot Projects and Preparatory Actions in the field of “Communications Networks, Content and Technology” under grant agreement LC-01682253.

Data Availability Statement

The data are available on the website: https://crossover.social/ under the sections Podcasts, Quizzes and Methodology. Complementary data are available on Savoir*Devenir website: https://savoirdevenir.net/crossover/ (accessed on 1 May 2024).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. European Commission, DG-Connect. A Multi-Dimensional Approach to Disinformation—Report of the Independent High Level Group on Fake News and Online Disinformation; Publications Office of the European Union: Brussels, Belgium, 2018. Available online: https://digital-strategy.ec.europa.eu/en/library/final-report-high-level-expert-group-fake-news-and-online-disinformation (accessed on 1 February 2024).
  2. Seaver, N. Captivating Algorithms: Recommender Systems as Traps. J. Mater. Cult. 2019, 24, 421–436.
  3. Dogruel, L.; Facciorusso, D.; Stark, B. ‘I’m still the master of the machine.’ Internet users’ awareness of algorithmic decision-making and their perception of its effect on their autonomy. Inf. Commun. Soc. 2020, 25, 1311–1332.
  4. Boyd, D.; Crawford, K. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Inf. Commun. Soc. 2012, 15, 662–679.
  5. Hill, J. Policy Responses to False and Misleading Digital Content: A Snapshot of Children’s Media Literacy; OECD Education Working Papers, No. 275; OECD: Paris, France, 2022.
  6. Crossover Project (2021–2022). Available online: https://crossover.social (accessed on 1 March 2024).
  7. Beer, D. The social power of algorithms. Inf. Commun. Soc. 2017, 20, 1–13.
  8. Dogruel, L.; Masur, P.; Joeckel, S. Development and Validation of an Algorithm Literacy Scale for Internet Users. Commun. Methods Meas. 2022, 16, 115–133.
  9. Cotter, K.M. Critical Algorithmic Literacy: Power, Epistemology, and Platforms. Ph.D. Dissertation, Michigan State University, East Lansing, MI, USA, 2020. Available online: https://search.proquest.com/openview/3d5766d511ea8a1ffe54c53011acf4f2/1?pq-origsite=gscholar&cbl=18750&diss=y (accessed on 1 March 2024).
  10. Nguyen, D.; Beijnon, B. The data subject and the myth of the ‘black box’: Data communication and critical data literacy as a resistant practice to platform exploitation. Inf. Commun. Soc. 2023, 27, 333–349.
  11. Matthews, P. Data literacy conceptions, community capabilities. J. Community Inform. 2016, 12, 47–56.
  12. Shin, D.; Rasul, A.; Fotiadis, A. Why am I seeing this? Deconstructing algorithm literacy through the lens of users. Internet Res. 2022, 32, 1214–1234.
  13. Head, A.; Fister, B.; MacMillan, M. Information Literacy in the Age of Algorithms. 2020. Available online: https://projectinfolit.org/pubs/algorithm-study/pil_algorithm-study_2020-01-15.pdf (accessed on 1 March 2024).
  14. Swart, J. Experiencing Algorithms: How Young People Understand, Feel About, and Engage with Algorithmic News Selection on Social Media. Soc. Media Soc. 2021, 7, 20563051211008828.
  15. Bucher, T. The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Inf. Commun. Soc. 2017, 20, 30–44.
  16. Brisson-Boivin, K.; McAleese, S. Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy; MediaSmarts: Ottawa, ON, Canada, 2021. Available online: https://mediasmarts.ca/sites/default/files/publication-report/full/report_algorithmic_awareness.pdf (accessed on 1 March 2024).
  17. Nygren, T.; Frau-Meigs, D.; Corbu, N.; Santoval, S. Teachers’ views on disinformation and media literacy supported by a tool designed for professional fact-checkers: Perspectives from France, Romania, Spain and Sweden. SN Soc. Sci. 2022, 2, 40.
  18. Moylan, R.; Code, J. Algorithmic futures: An analysis of teacher professional digital competence frameworks through an algorithm literacy lens. Teach. Teach. Theory Pract. 2023.
  19. Cotter, K.; Reisdorf, B.C. Algorithmic Knowledge Gaps: A New Horizon of (Digital) Inequality. Int. J. Commun. 2020, 14, 745–765. Available online: https://ijoc.org/index.php/ijoc/article/view/12450 (accessed on 1 May 2024).
  20. Frau-Meigs, D. User Empowerment through Media and Information Literacy Responses to the Evolution of Generative Artificial Intelligence (GAI); UNESCO: Paris, France, 2024. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000388547 (accessed on 1 March 2024).
  21. Frau-Meigs, D. Transliteracy as the new research horizon for media and information literacy. Media Stud. 2012, 3, 14–27. Available online: https://hrcak.srce.hr/ojs/index.php/medijske-studije/article/view/6064 (accessed on 1 March 2024).
  22. Frau-Meigs, D. Transliteracy and the digital media: Theorizing Media and Information Literacy. In International Encyclopedia of Education, 4th ed.; Tierney, R., Rizvi, F., Ercikan, K., Eds.; Elsevier: Amsterdam, The Netherlands, 2022.
  23. Carroll, J.M. Making Use: Scenario-Based Design of Human-Computer Interactions; The MIT Press: Cambridge, MA, USA, 2000.
  24. Alexander, I.F.; Maiden, N. (Eds.) Scenarios, Stories, Use Cases through the Systems Development Life Cycle; Wiley: Hoboken, NJ, USA, 2004.
  25. Hargittai, E.; Gruber, J.; Djukaric, T.; Fuchs, J.; Brombach, L. Black box measures? How to study people’s algorithm skills. Inf. Commun. Soc. 2020, 23, 764–775.
  26. Lloyd, A. Chasing Frankenstein’s Monster: Information literacy in the black box society. J. Doc. 2019, 75, 1475–1485.
  27. Frau-Meigs, D.; Corbu, N. (Eds.) Disinformation Debunked: MIL to Build Online Resilience; Routledge: London, UK, 2024.
  28. Potter, W.J.; Thai, C.L. Reviewing media literacy intervention studies for validity. Rev. Commun. Res. 2019, 7, 38–66.
  29. Savoir Devenir. Algo-Literacy Prebunking Kit. 2022. Available online: https://savoirdevenir.net/wp-content/uploads/2023/03/PREBUNKING-KIT-ENG.pdf (accessed on 1 March 2024).
  30. Kahne, J.; Hodgin, E.; Eidman-Aadahl, E. Redesigning civic education for the digital age: Participatory politics and the pursuit of democratic engagement. Theory Res. Soc. Educ. 2016, 44, 1–35.
  31. McGrew, S.; Breakstone, J.; Ortega, T.; Smith, M.; Wineburg, S. Can Students Evaluate Online Sources? Learning From Assessments of Civic Online Reasoning. Theory Res. Soc. Educ. 2018, 46, 165–193.
  32. Frau-Meigs, D. How Disinformation Reshaped the Relationship between Journalism and Media and Information Literacy (MIL): Old and New Perspectives Revisited. Digit. J. 2022, 10, 912–922.
  33. Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press: Cambridge, MA, USA, 2015.
  34. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016, 3, 2053951715622512.
  35. Le Deuff, O.; Roumanos, R. Enjeux définitionnels et scientifiques de la littératie algorithmique: Entre mécanologie et rétro-ingénierie documentaire. tic&société 2021, 15, 325–360.
  36. Masterman, L. Teaching the Media; Routledge: London, UK, 1985.
  37. Wineburg, S.; McGrew, S. Lateral Reading and the Nature of Expertise: Reading Less and Learning More When Evaluating Digital Information. Teach. Coll. Rec. 2019, 121, 1–40.
  38. Rieder, B. Engines of Order: A Mechanology of Algorithmic Techniques; Amsterdam University Press: Amsterdam, The Netherlands, 2020.
  39. European Commission, DG-EAC. Guidelines for Teachers and Educators on Tackling Disinformation and Promoting Digital Literacy through Education and Training; Publications Office of the European Union: Brussels, Belgium, 2022. Available online: https://data.europa.eu/doi/10.2766/28248 (accessed on 1 March 2024).
  40. Kemper, J.; Kolkman, D. Transparent to whom? No algorithmic accountability without a critical audience. Inf. Commun. Soc. 2019, 22, 2081–2096.
  41. Kitchin, R. Thinking critically about and researching algorithms. Inf. Commun. Soc. 2017, 20, 14–29.
Figure 1. Data collection strategy, using two types of sources (API and user-meter). https://crossover.social/methodology/ (accessed on 1 March 2024).
Figure 2. Dashboard of YouTube. https://crossover.social/methodology/ (accessed on 1 March 2024).
Figure 3. Algo-literacy meta-competences framework. Transliteracy colour code: lime green for Media (first two), light blue for Documents (middle two), light parma/violet for Data (last two).
Figure 4. Algo-literacy for all in 10 key points. (https://crossover.social/algo-literacy-for-all-in-10-points/, accessed on 1 May 2024).
Table 1. Matrix of scenarios of use. MIL algo-literacy matrix (that can be transferred to classroom interventions). Each scenario is described along five dimensions: scenario of use, real-life event, algorithmic focus, MIL competences, and larger societal issues.

Scenario 1 (https://crossover.social/podcast/crossover-podcast-episode-1-the-keyboard-fighters/, accessed on 1 May 2024)
  • Scenario of use: searching by keywords on search engines like Google; keyword: “Liberty convoy”.
  • Real-life event: Article 1 (15 February 2022); Podcast 1 (13 July 2022), “The keyboard fighters”; based on an investigation looking at “Freedom Convoy” threats to invade Brussels.
  • Algorithmic focus: ranking algorithms and search; what a keyword is, its use in information, the difference between a keyword and a hashtag.
  • MIL competences: analysis of the mechanisms of disinformation and of the debunking process.
  • Larger societal issues: contrast between URL (virtual) and IRL (real) mobilizations.

Scenario 2 (https://crossover.social/podcast/crossover-podcast-episode-2-dangerous-liaisons/, accessed on 1 May 2024)
  • Scenario of use: searching for affinity communities, groups, influencers and actors via hashtags on social networks like YouTube; hashtag: “RT Russia”.
  • Real-life event: Article 2; Podcast 2 (3 November 2022), “Algorithms and propaganda: dangerous liaisons”; based on an investigation looking at the ban on RT during the war in Ukraine and its subsequent replacement by CGTN Français.
  • Algorithmic focus: the role of participation in social network trends; what engagement is, how it affects ranking and dissemination, how communities influence trends, what an echo chamber is.
  • MIL competences: understand the economy of attention; analyse the mechanisms of cyber-propaganda; grasp the basic functioning of engagement and amplification via algorithms; link state propaganda and algorithmic recommendation.
  • Larger societal issues: algorithmic “addiction” to state media that propagate disinformation.

Scenario 3 (https://crossover.social/podcast/crossover-podcast-episode-3-how-algorithms-changed-my-job/, accessed on 1 May 2024)
  • Scenario of use: searching for trends and influential accounts on forums such as Odysee; looking for personalities and influencers such as Dries Van Langenhove.
  • Real-life event: Article 3 (8 June 2022); Podcast 3 (17 January 2023), “How algorithms changed my work”; based on reflexive discussions about using algorithms to do algo-journalism and dealing with conspiracy theories.
  • Algorithmic focus: recommendation algorithms and attention; how prediction differs from recommendation, how it informs the behaviour of algorithms (and users?).
  • MIL competences: understand the role of communities and influencers in information/disinformation; develop know-how to obtain more diversified information.
  • Larger societal issues: economics of attention/recommendation.

Scenario 4 (https://crossover.social/podcast/crossover-podcast-episode-4-algorithm-watchers-digital-fact-checking-prediction-algorithms-disinformation/, accessed on 1 May 2024)
  • Scenario of use: searching for disinformation with a smart tool like the Dashboard; Google auto-complete; keyword: “Donbass”.
  • Real-life event: Article 4 (29 September 2022); Podcast 4 (22 February 2023), “The algorithm watchers”; based on reflexive discussions on the experience of developers using the Dashboard and interacting with other stakeholders.
  • Algorithmic focus: algorithmic prediction, bias and propaganda; what about the neutrality of algorithms? How does the Dashboard prove that algorithms change the information game and help understand the way they work?
  • MIL competences: understand how algorithms can bias information and push disinformation; identify manipulations; objectify the work of journalists; use technical tools to fight disinformation; uncover the functioning of algorithms across platforms; deal with digital fact-checking, prediction algorithms and disinformation.
  • Larger societal issues: bias, manipulation.
Table 2. Quiz 1 associated with podcast 1 (match each goal with its solution).
  • Goal: limiting the amount of data collected from your devices to reduce targeting. Solution: setting your cookies to limit tracking.
  • Goal: browsing anonymously. Solution: using a VPN.
  • Goal: not falling for sensationalist news. Solution: watching out for information that arouses a lot of emotion and verifying it.
  • Goal: going beyond the beaten path and varying your sources of information. Solution: opening your community to people with different profiles and looking beyond the first page of Google or searching on other sites.
  • Goal: making sure that informing yourself is a voluntary act that respects clear rules. Solution: mobilising for increased regulation of algorithms and for more transparency about their impact.
Table 3. Quiz 2 associated with podcast 2.
Fake accounts and bots are created by the millions every day and are often the basis of raging debates. What are the signs that should make you suspicious?
  • They don’t have a photo
  • They never publish, and suddenly broadcast a lot of messages on a “hot” topic
  • They speak Russian or Chinese
  • They have many friends or followers, but there is very little activity between their profile and their supposed community
  • They are only interested in one type of topic
  • They share hundreds of posts per day
Answer: the correct answers (shown in bold in the original quiz) are only clues. The more of them that converge, the higher the probability that you are dealing with a bot.