1. Introduction
Relatively new Information and Communications Technologies (ICTs), such as podcasts, videos and simulations, are increasingly appearing as assessments in university science subjects [
1,
2]. There seem to be two drivers for this. First, developments in ICTs mean learners can create new, more complex (usually dynamic) media forms more efficiently and with relatively user-friendly, low-cost software [
3] and, second, there is a push for university graduates to have a set of ‘soft skills’ in addition to ‘content knowledge’ in order to be ready for 21st-century jobs [
4]. In terms of communication, for instance, recent reforms related to tertiary science education in Australia explicitly stipulate that graduates must be able to communicate ‘to a range of audiences’, ‘for a range of purposes’ and ‘using a variety of modes’ [
5] by the end of their degrees in order to be prepared for diverse employment outcomes [
6,
7,
8].
These assessments are new territory because although traditional university assessments for science subjects usually incorporate a range of representations, these are often in a static form, like a poster, or mediated by the learner, such as in a multimodal presentation [
9]. By contrast, the digital products discussed in this paper are both dynamic and standalone (they can be played from beginning to end). A review of the use of the ‘Dynamic Standalone Product’ (DSP) in university science subjects reveals that they take on a variety of forms, with the most common including three or four main modes: image, narration, text on screen (labelling) and either animation or video [
10]. Such a resource might resemble a PowerPoint presentation with overlaid narration that plays from beginning to end, or a short film which includes different scenes and role play. For a discussion on naming conventions and an outline of different types of digital products, see [
11]. There is sufficient evidence to suggest constructing a DSP is advantageous for learners in terms of developing disciplinary knowledge, increasing engagement, facilitating collaborative skills (when working in groups) and developing communication skills, e.g., [
12,
13,
14].
For example, Hoban et al. [
12] found that constructing a ‘slowmation’ (a type of DSP that involves compiling images together to create a ‘slow’ animation) helped to rectify alternative conceptions related to the phases of the moon. In [
14], Mills et al. found that engagement in the slowmation task improved interest in science. Further, it is argued that the dynamic and multi-representational elements of these resources might be uniquely responsible for these potential benefits [
14].
To understand why DSPs have such potential, we might look to the research on representations in science education. A strong link between disciplinary understanding, epistemology and communication skills at all levels of education is identifiable in this literature, e.g., [
15,
16,
17,
18,
19,
20,
21,
22,
23]. That is, a range of studies show that, in working with language and other representations, students develop communication skills, improve conceptual understanding and are likely to gain a more sophisticated understanding of the nature and structure of scientific knowledge. For example, Pelger and Nilsson found that intentional instruction in communication techniques improved subject understanding amongst university science students [
21]. These benefits have been found at all educational stages. For instance, a study of primary-aged students showed that the use of a multimodal task as an assessment improved expression, encouraged refinement of thinking and developed knowledge about the topic [
20]. When ninth graders were asked to explain the ‘work-energy’ concept using a range of representations, students gained both quantitative understanding and greater epistemological awareness [
22]. Klein and Kirkpatrick explain that this might be because ‘(…) representations do not simply transmit scientific information; they are integral to reasoning about scientific phenomena’ [
23].
It seems natural to use technology to facilitate working with representations. However, there is no guarantee that the potential benefits will be realised. Research on the use of technology in science education more generally has established that beneficial or successful integration depends on a variety of complex factors [
24,
25]. However, most of this kind of research primarily focuses on the use of technology by the instructor, so what we know about student-created products is even more limited. For instance, in the multimedia literature, Cognitive Load Theory is used to justify elements of multimedia design for instructors, such as avoiding irrelevant information and combining visual and auditory modalities [
26]. Yet this field of research does not provide a basis for understanding how knowledge is represented, by a maker, in a multimodal assessment product. Thus, we know little about the neophyte scientist and communicator: what do they understand about multimodal science communication? How is this reflected in the product? How does this understanding develop? Reyna and Meier [
11], in their review, state that the field of Learner Generated Digital Media (LGDM) is still very much in its infancy. They identify a lack of theoretical underpinning, which leads to variable results when DSPs are used as assessments. This lack of understanding of the principles that underlie effective digital products is problematic because, without such principles, judgement is often left to the subjective views of individual instructors assessing the overall ‘feel’ of the product, and this limits what can be gained from the process (p. 102).
In order to understand the relationship between representations, communication and knowledge, we developed a preliminary theoretical framework from the close analysis of two DSPs, which will ultimately help develop our understanding of the principles of dynamic media creation and thereby inform related pedagogies and assessment practices. Because our particular interest is the intersection between knowledge and communication, we draw on a sociologically-based theory, known as ‘Legitimation Code Theory’, which is designed to decode knowledge practices.
Legitimation Code Theory (LCT) is a sociologically-based theoretical framework grounded in social realist philosophy and focused on knowledge [
27,
28]. This means that it treats knowledge as a central consideration when examining social practices (such as education). Maton emphasises that this is a point of difference in education research, which he argues is often ‘knowledge blind’: psychologically-based approaches, such as Cognitive Load Theory, focus on what is happening ‘in the mind’, whilst other sociologically-based approaches focus instead on power relations, both overlooking knowledge as an object of study in its own right. That the framework falls under the umbrella of ‘social realism’ means that knowledge is considered both socially constructed and real, in that it exists ‘outside’ the minds of either the individual or the collective. As a sociologically-based framework, its goal is to find ‘what lies beneath’, or, as Maton puts it, what the ‘rules of the game’ are [
27]. LCT is also a practical framework, where revealing or making explicit these characteristics has implications for practice [
28]. For instance, in a study by Howard and Maton [
29] on technology use in Australian high schools, a significant difference was found between teachers of different disciplines. Coding teacher (and student) responses to survey questions revealed that this was due to their different implicit disciplinary identities. Making these characteristics explicit then allows a better understanding of the use of technology in these classrooms. In terms of student understanding, Georgiou et al. [
30] found that this was facilitated by identifying and characterising a particular element of knowledge: its abstraction. The research demonstrated that there was an underlying code in terms of abstraction when students answered typical exam-style questions: abstract, but not too abstract (e.g., referring to the energy of phase change rather than employing the ideal gas law). Due to its focus on knowledge and its practical utility, LCT was considered an appropriate framework to shed light on DSPs used as assessments in the tertiary context.
LCT consists of ‘dimensions’, each of which focuses on a different set of organizing principles underlying practices. Three of these dimensions, Specialization, Semantics and Autonomy, have been substantially developed and applied in empirical research to address issues in education (and beyond). Specialization focuses on ‘knowledge-knower structures’ and is founded on the premise that ‘practices are about or oriented towards something and by someone’ [
28] (p. 12). Thus, analytically, Specialization conceptualizes the relationships between practices and their object (known as epistemic relations) and practices and their subject (known as social relations). For example, Physics is a discipline known to be represented by stronger epistemic relations and weaker social relations: ‘possession of specialized knowledge, principles or procedures concerning specific objects of study is emphasized as the basis of achievement, and the attributes of actors downplayed’ [
28] (p. 13). As a contrastive example, weaker epistemic relations and stronger social relations reflect cases where ‘specialized knowledge and objects are downplayed and the attributes of actors are emphasized as measures of achievement, whether viewed as born (e.g., ‘natural talent’) cultivated (e.g., ‘taste’) or social (e.g., ‘feminist standpoint theory’)’ [
28] (p. 13). The dimension of Semantics explores semantic structures, whose organizing principles are determined by two constructs that vary in strength: semantic gravity and semantic density. Semantic gravity refers to the degree to which meaning relates to its context (the stronger the semantic gravity, the more strongly meaning relates to its context; the weaker the semantic gravity, the less strongly it does). To focus our analysis, we employ the LCT concept of ‘semantic density’, because semantic density conceptualises complexity, and the aim of the assessment products examined here is to communicate complex ideas to a non-expert audience. Maton and Doran [
31] elaborate on semantic density:
‘Semantic density’ … conceptualizes complexity in terms of the condensation of meanings within practices (symbols, concepts, expressions, gestures, actions, clothing, etc.). The strength of semantic density can vary along a continuum. The stronger the semantic density (SD+), the more meanings are condensed within practices; the weaker the semantic density (SD−), the fewer meanings are condensed. Put another way, semantic density explores the relationality of meanings: the more meanings are related, the stronger the semantic density. (p. 49).
LCT studies suggest that the negotiation of semantic density, that is, strengthening (increasing complexity) and weakening (decreasing complexity, or ‘unpacking’), is a key aspect of building knowledge in classroom practices [
32,
33,
34,
35,
36]. In a study by Maton [
36], for example, teacher presentations exhibited stronger and weaker moments of semantic density, reflecting the ‘unpacking’ (describing elements of the concept of ‘cilia’ as little hairs that perform specific functions) and ‘repacking’ (a process whereby these functions are summarised, together with others, in a comparative table) of scientific concepts. Theoretically, this movement is described as a ‘semantic wave’ and has subsequently been identified as important in a range of different contexts.
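To make the idea of a semantic wave more concrete, the short Python sketch below codes a sequence of clauses on a hypothetical ordinal semantic density scale (1 = everyday meaning, 4 = highly condensed technical meaning) and reports where the profile weakens (‘unpacking’) or strengthens (‘repacking’). The clauses, scale and code are illustrative assumptions of our own and do not reproduce the translation device used in this study.

# Hypothetical illustration of a 'semantic wave': clause-by-clause semantic
# density (SD) coded on an invented ordinal scale (1 = everyday meaning,
# 4 = highly condensed technical meaning).
clauses = [
    ("Mucociliary clearance protects the airways.", 3),
    ("Cilia are like little hairs on the cell.", 1),        # unpacked into everyday terms
    ("They beat in waves to move mucus along.", 2),
    ("This beating drives the mucociliary escalator.", 4),  # repacked into technical terms
]

def wave_movements(coded_clauses):
    """Report whether SD strengthens ('repacking') or weakens ('unpacking')
    from one clause to the next."""
    movements = []
    for (_, previous), (text, current) in zip(coded_clauses, coded_clauses[1:]):
        if current > previous:
            movements.append(("strengthening (repacking)", text))
        elif current < previous:
            movements.append(("weakening (unpacking)", text))
        else:
            movements.append(("flat", text))
    return movements

for direction, text in wave_movements(clauses):
    print(f"{direction}: {text}")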
Though diverse, studies in LCT have utilized some common methodological approaches. In terms of coding practices, a ‘translation device’ might be used. A translation device acts as a ‘translation’ of theoretical constructs (such as semantic density) to data. Though translation devices have not yet been developed for all constructs in LCT (nor are they necessary for every context), one has been developed for the analysis of language; in this case, it offered the principles by which the semantic density of language was coded to represent different levels of complexity [
31]. The main distinction occurs between ‘everyday’ and ‘technical’ language, with subdivisions within each of these two main categories. The approach is an analysis of discourse, but not necessarily ‘discourse analysis’, since the analysis is sociological, focusing on characterizing the knowledge rather than the way it is expressed linguistically. Nevertheless, because it is an analysis of language, the influence of Linguistics, specifically Systemic Functional Linguistics, is apparent [
31]. Systemic Functional Linguistics (SFL) is a theory of language commonly used in conjunction with LCT [
37,
38]. Some key ideas that originate from SFL and are relevant to this paper include ‘technicality’ and ‘informational density’, characteristics identified by SFL scholars as key features of scientific discourse, as found in a range of ‘expert’ science texts, such as textbooks [
39,
40,
41,
42]. Technicality refers to the degree of specialization of meaning, where the crudest distinction occurs between ‘everyday’ meaning and meaning constructed within a particular field (e.g., of Science) [
41,
42]. Lexical density, according to Halliday and Matthiessen in SFL [
42], refers to how much meaning is ‘packed’ within a text, and essentially ‘counts’ the number of ‘content’ words as a proportion of total words in a ranking clause (a minimal computational sketch of this measure is provided after the quotations below). Such analyses have revealed characteristic profiles of language, such as the fact that spoken language is less lexically dense than written language. Identifying these characterisations is important for learning because, as Shanahan and Shanahan state [
43], the complexities of language increase through schooling whilst explicit instruction in this area decreases. In science, it is generally understood that communication skills, particularly beyond the written form, are difficult to assess. In the task that is the subject of this paper, for example, although the assessment criteria clearly state that technical terms must be well-defined and pharmacological concepts ‘conveyed to a general audience’, interviews with the creators of the resources, analysed in a separate study, give a sense that what is valued instead is ‘getting the science right’ [
10]. The perception is that the ‘communication’ part is not explicitly taught or assessed:
if you are trying to teach students effective communication, we did not really get an assessment brief I did not really know what I was actually meant to be producing.
If you are teaching these skills- in my experience they do try and teach communication skills but a lot of the time they do not mark towards the communication skills, they mark towards the content that is communicated. So even if it is communication poorly, you can still go okay.
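Returning to the lexical density measure introduced above, the following Python sketch illustrates the idea of counting ‘content’ words as a proportion of total words in a clause. The small function-word list, simple tokenisation and example clauses are simplifications introduced here for illustration; they are not Halliday and Matthiessen’s grammatical criteria.

# Rough illustration of lexical density: content words as a proportion of
# total words in a clause. A small function-word list stands in for a full
# grammatical analysis; the clauses are invented examples.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "to", "and", "or", "but", "so",
    "is", "are", "was", "were", "be", "it", "this", "that", "by", "with",
    "from", "within",
}

def lexical_density(clause: str) -> float:
    words = [w.strip(".,;:!?").lower() for w in clause.split()]
    words = [w for w in words if w]
    content_words = [w for w in words if w not in FUNCTION_WORDS]
    return len(content_words) / len(words) if words else 0.0

spoken = "so the drug basically stops the parasite from growing in the blood"
written = "antimalarial drugs inhibit parasite replication within erythrocytes"
print(f"spoken-style clause:  {lexical_density(spoken):.2f}")   # lower density
print(f"written-style clause: {lexical_density(written):.2f}")  # higher density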
Assessing digital products that include complex arrangements of representations and dynamic elements, like the DSPs discussed here, is a substantial challenge. Addressing it requires a range of different approaches and, as some have argued, a new interdisciplinary field of research [
44]. This work reflects the development of a preliminary framework, based on LCT, that aims to provide clarity around elements that have been difficult to assess in more complex assessment products, such as DSPs. The analysis is a first step towards providing a literature base that supports the use of DSPs as assessments in higher education.
2. Materials and Methods
The theoretical approach was developed using a close analysis of two student-generated texts submitted as part of a third-year pharmacology subject at an Australian university. These texts were collected as part of a wider program of research supported by a national grant focused on learning through student-generated digital media [
45]. As part of this project, the full sample (
n = 41) of digital media products, collected over a three-year period (2015–2017), was characterised in an iterative process involving the four researchers. This process produced a ‘Variety-Quality’ or ‘VQ’ Matrix, which acted as a way to easily glean some of the characterising features of the products. These products varied in many ways, as they were collected from a variety of universities and subjects and included a range of different types of assessments. For example, nutrition students constructed a digital product to suggest what a culturally appropriate diet should include for a resident in a retirement home, and pre-service science teachers constructed a digital resource addressing a common secondary science misconception. This variety is captured in a separate paper [
10]. The two products selected to develop the conceptual framework were from the same university subject, assessment and year, but one achieved a higher mark than the other.
Table 1 depicts some characterisations drawn from the VQ Matrix for these two products. Excerpts from both are provided in the clip in the
Supplementary Materials. Notably, both are approximately five-minute, standalone creations completed by students in a third-year pharmacology assessment at an Australian university, and their purpose was to summarise a technical literature review and communicate complex information to a non-specialist audience (their peers in the subject, who are assumed to have general scientific knowledge only). In addition to these products, course documents (subject outline and assessment rubric) were collected and interviews were conducted with the subject coordinator and the creators of the resources. Excerpts from these additional documents will be used for illustrative purposes only. Reporting of these analyses can be found in [
46,
47].
Various software programs were used to analyse the data. Given the still-developing field of multimodal research, no single program was able to perform all the necessary analytical functions. The transcript, representing the verbal narration, was structured by clause, and individual frames (or groups of frames) were then matched to each clause. This structure, including time stamps, was analysed in Excel for the semantic density analysis of the narration. These data were then imported into NVivo for further analysis, specifically the semantic density analysis of the images, which occurred at the frame level. The results of these analyses were returned to Excel to consolidate and manipulate the data for the quantitative analysis. The narration and images were considered separately and subsequently integrated in an additional analytic process.
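To illustrate the kind of clause-level structure described above, the Python sketch below uses a hypothetical record in which each narration clause is time-stamped, matched to a frame, and coded for semantic density in both the narration and the image. The field names, the ordinal 1–4 scale and the example values are our own illustrative assumptions, not the actual coding scheme or data.

from dataclasses import dataclass
from statistics import mean

# Hypothetical clause-level record: each narration clause is time-stamped,
# matched to the frame(s) shown while it is spoken, and coded for semantic
# density (SD) in both narration and image on an illustrative 1-4 scale.
@dataclass
class ClauseRecord:
    start_s: float      # time stamp of the clause (seconds)
    clause: str         # transcribed narration clause
    frame_id: int       # matched frame (or group of frames)
    sd_narration: int   # coded SD of the narration (1 = everyday ... 4 = complex scientific)
    sd_image: int       # coded SD of the matched image

records = [
    ClauseRecord(0.0, "Malaria is spread by mosquitoes.", 1, 1, 1),
    ClauseRecord(4.5, "The female Anopheles mosquito transmits the parasite.", 2, 3, 1),
    ClauseRecord(9.0, "The parasite replicates inside red blood cells.", 3, 4, 3),
]

# Averages per mode, as used for a crude comparison of overall technicality.
print(f"average SD (narration): {mean(r.sd_narration for r in records):.2f}")
print(f"average SD (image):     {mean(r.sd_image for r in records):.2f}")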
In terms of the coding, because the analysis was based on LCT, this might be considered ‘deductive coding’, where the key stages include ‘developing the code manual’ and ‘testing the reliability of codes’ [
48]. Each part of the analysis was conducted in consultation with the relevant experts (SFL, Science or both) and involved initial coding, negotiation and final allocations. After negotiations, final codes were established at or near 100% agreement.
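As a simple illustration of this reliability step, the sketch below computes percent agreement between two coders’ initial allocations of semantic density categories to the same clauses. The values shown are invented; in the study itself, disagreements were resolved through negotiation to reach (near) full agreement.

# Minimal percent-agreement check between two coders' initial allocations of
# SD categories to the same clauses (values invented for illustration).
coder_a = [1, 3, 4, 2, 4, 1]
coder_b = [1, 3, 3, 2, 4, 1]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"initial agreement: {matches / len(coder_a):.0%}")  # remaining disagreements negotiated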
All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved in May 2016 by the Human Research Ethics Committee of the University of Wollongong (protocol number HE16/165).
4. Discussion
Analytical approaches to representing knowledge in the fields of technology and science education are still in their infancy and, as such, support for students and teachers in developing and assessing effective multimodal communication in digital products is not well developed. This matters because working with multiple representations, facilitated by technology, has the potential to be holistically beneficial to a student in terms of their science education. In this paper, we developed a preliminary conceptual framework that draws on LCT in order to understand how to assess student understanding of science and communication through digital products. In this section, therefore, we discuss both what this type of analysis can tell us about the practical issue of DSPs as assessments and what the affordances and limitations of this particular theoretical approach are for research in multimodality and technology use.
4.1. Multimodal Digital Assessments
The DSPs studied in this project involve the use of technology to allow knowledge to be represented dynamically through image, text and narration. As mentioned in the background section of this paper, the potential for digital products as authentic and holistic assessment is being increasingly recognised. In the words of a student undertaking this task: ‘this sort of task…I learn better by researching it and figuring it out myself than just being taught it because I have to get it, because if I don’t, I don’t get the marks’ [
10].
However, as discussed in the Introduction, the assessment of communication in science is often sidelined due to a focus on ‘content’, and there are few frameworks in the literature that can be utilised. In this paper, we focus on one element: understanding the nature of communicating complex scientific concepts. In this space, the LCT concept of semantic density is employed in order to more reliably assess the level of understanding, of both the science and the communication, represented in the products (how complex, and how much ‘condensation’, is appropriate for each audience). Analysing the digital products in terms of the relative semantic density expressed in both the image and the narration first revealed that the two texts have different overall ‘quantities’ of average semantic density. The considerable difference in the levels of semantic density in the two texts (as well as across the whole sample) demonstrates that judgement around the level of semantic density, or technicality, used in a text for a specific audience is inconsistent. These quantities could be used as a crude measure of the level of technicality for students, prompting them to consider whether it is appropriate, for example when compared to other texts for the same audience or to a model text provided by the instructor.
More importantly, two distinct techniques that ‘control’ complexity in some way were apparent across these texts: negotiation and building. They provide insight into how complexity is communicated effectively. In the Malaria text, these two techniques were used to great effect, as signalled by the full marks awarded for the assessment (
Table 1). One negotiation technique involved ‘place-holding’: using common-sense language or images at points where technical terms or complex processes were introduced. When listing the names of the various drugs, for example, a picture of a pill box was shown. This idea is supported by comments made by the creator of Malaria in response to a question about how they attended to audience:
So a lot of the decisions I made was just because if you are an unspecialised audience you don’t really care about the science behind it… so it’s not a detailed image, there’s not information image: it’s just more this is a mosquito, that’s a mosquito, so I’m saying mosquito here’s a picture of a mosquito.
This negotiation attempt acknowledges that the text is at a relatively semantically dense section (in the section the creator refers to, the term ‘female Anopheles mosquito’ is used) and negotiates this relatively strong semantic density by simplifying the message in the image (a mosquito). In the Malaria text, there are many of these attempts at negotiation; the use of a ‘placeholder’ allows easily accessible meaning to be condensed into technical terms. The second negotiation technique involves placing less technical sections before and after sections with relatively stronger semantic density, as outlined in the previous section and shown in the peaks in
Figure 1. This could reflect one aspect of the required ‘balance of detail’ across the text, as specified in the rubric.
However, at Point A3 we see instead a different technique, building. At this point, there is no negotiation or common-sense meaning represented in the image, and both image and narration are relatively technical. In this section, the sustained technicality was necessary because a complex idea needed to be communicated, and this required careful building across image and language and throughout the text. This happened only once in the text and, as such, the points of negotiation appear to be intentional acts leading up to the ‘building’ activity that communicated the central message of the text.
In MS, we see evidence of multiple points of building but not of negotiation of complexity. Point B1, for example, builds complexity by reflecting in the imagery the process described in the narration. However, the whole process is described in only two clauses (the clause at Point B1 and the previous clause only). The clause following Point B1 represents a completely different process. In fact, this whole process is described in less than 15 s, only to be repeated immediately afterwards. What is significant about this is that the text essentially consists of a series of such explanatory sections: very complex ‘building’ processes that are neither linked to each other nor negotiated by the narration or image. The narration stays extremely semantically dense; it is not clear that there are any attempts to, as the assessment criteria state, define ‘Technical terms’, use ‘language appropriate for a general audience’ and ensure ‘Material is relevant and an appropriate amount is provided (balance of detail vs. general overview)’. This resource also does not make use of labels. This is ultimately a less successful attempt at communicating an idea to a non-specialised audience: though the complexity is there, the level of technicality (semantic density) is misjudged, and the building is too abundant and too rapid to have the same effect as in Malaria.
Semantic density could potentially offer a language with which to communicate how complexity manifests across multimodal forms in science. This could be useful in assessment, providing instructors with more specificity around the communicative elements of a task, as well as pedagogical resources to assist students in identifying technicality and complexity, and in learning how to negotiate them. For instance, the quantification of complexity could act as a quick check of the level of complexity of texts created as communication objects for particular audiences. Different ‘complexity’ ranges could be identified as more or less appropriate, flagging to the maker that the text should be further developed. The two techniques identified here could be included as part of a ‘toolkit’ of techniques provided to students to help them decide how to consider the audience when producing communicative texts. More generally, this approach helps make these elements of knowledge more explicit. That is, the ‘rules of the game’ are made more visible to students, providing them with access to otherwise hidden disciplinary ways of knowing and representing.
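One possible way to operationalise such a ‘quick check’ is sketched below in Python: a draft’s average semantic density is compared against a target band derived from a model text for the same non-specialist audience. The function, the band width and the values are hypothetical illustrations rather than validated thresholds.

# Hypothetical 'quick check': flag a draft whose average semantic density (SD)
# falls outside a target band around a model text for the same audience.
# The tolerance and example values are illustrative only.
def complexity_check(draft_avg_sd: float, model_avg_sd: float, tolerance: float = 0.5) -> str:
    if draft_avg_sd > model_avg_sd + tolerance:
        return "likely too technical for this audience: consider unpacking key terms"
    if draft_avg_sd < model_avg_sd - tolerance:
        return "may oversimplify: consider building towards the central concept"
    return "within the expected range for this audience"

print(complexity_check(draft_avg_sd=3.4, model_avg_sd=2.2))
print(complexity_check(draft_avg_sd=2.0, model_avg_sd=2.2))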
4.2. Use of a New Approach to Analyse Digital Products
The second matter we will discuss concerns the merits of this preliminary conceptual framework for analysing digital products. The previous section demonstrated its utility in the practical sense of assessing communication skills; this section, therefore, focuses on the theoretical and methodological implications, benefits and limitations of LCT.
Theoretically, LCT is a ‘socially grounded’ approach focused on ‘decoding’ knowledge. In this particular case, we were interested in one aspect of these texts: how meaning or complexity is built. Semantic density was, therefore, used as a construct with which to understand this aspect. In terms of the methodology, there are two key strengths that are important to highlight. First, the ‘translation device’ offers a way of limiting subjectivity and increasing transparency in methods involving coding. The coding activity was very consistent, and few discrepancies were found, avoiding an otherwise problematic issue widely known to affect qualitative methods. The second main methodological advantage was the nature of assigning ‘values’ of semantic density. Assigning a value to data affords the advantage of further analysis and of more straightforward communication of key ideas. In this case, the ‘degree’ of complexity was quantified, allowing a clear representation of the average semantic density in the text, as well as of how it changed throughout the text. The sliding scale present in LCT across a range of concepts also allows for infinite gradations, offering the possibility of relatively simple or significantly sophisticated analysis. Further, since we are discussing ‘relative’ values of semantic density, we are comparing data points to each other rather than to an absolute. These advantages are becoming increasingly recognized in research where knowledge is key [
49].
4.3. Limitations and Further Research
As a ‘first step’ in the analysis of a complex object, both the practical and theoretical elements of this study exhibited limitations. In terms of the practical aim, to clarify how complexity is communicated in a student-generated digital product (a DSP), the analysis, though detailed, was based on only two samples. Thus, the principles that were identified are likely only a subset of a larger set of principles.
Theoretically, dealing with meaning making across modes was a challenge. Often, it was difficult to know how to ‘treat’ a resource (or indeed how to label it). In the MS text, for example, the ‘images’ were actually placed consecutively to form a ‘slow animation’ or ‘slowmation’ [
12]. In treating these stills as images, it is possible that we might omit consideration of factors associated with how ideas progress over time. In terms of the coding of the images, though our two samples proved relatively straightforward, our categories were also quite broad, particularly the ‘simple scientific’ and ‘complex scientific’ categories, the two relatively strong semantic density categories. Determinations of complexity across different disciplines (e.g., a chemical representation of a molecule or a display of a graph) could be a challenge. Furthermore, the use of language as image (text on screen) would be similarly difficult to manage, as would any assumption about the ‘primacy’ of the resource used (e.g., narration as being the ‘main carrier’ of meaning). The quantification of semantic density could also be open to critique; such quantification is not common, so how to treat it, quantitatively and statistically, is still to be established.
Further research could continue this analysis with a larger number of texts and with more complex combinations of modes (including animation), to confirm the existence of these techniques for successfully communicating complex scientific concepts in dynamic media, as well as to identify any others. Other elements of knowledge, including the degree of abstraction or the making of ‘interpersonal’ meaning, could also be explored in order to more explicitly capture and, therefore, more precisely assess these aspects of knowledge, where appropriate. Increasing our understanding of more ‘open’ assessments, such as these DSPs, is important if we want to encourage a more ‘holistic’ teaching and learning experience for university students.