Article

How Geography Textbook Tasks Promote Comparison Competency—An International Analysis

Institute for Geography Education, University of Cologne, Gronewaldstraße 2, 50931 Köln, Germany
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(20), 8344; https://doi.org/10.3390/su12208344
Submission received: 25 August 2020 / Revised: 1 October 2020 / Accepted: 7 October 2020 / Published: 10 October 2020
(This article belongs to the Special Issue Natural Risk Perception and Geography Education)

Abstract

Comparison is an important competency for gaining and linking knowledge. It can be learned in geography classes to help students understand complex concepts and develop autonomous geographical thinking. However, we do not currently have any model to assess comparison as a competency in geography classes. In addition, little is known about how textbook tasks promote comparison competency. Therefore, in this study, a competency model for comparison in geography education was developed. It consists of four dimensions of comparison competency, which relate either to the mastering of comparison processes or to content-related elements of comparisons. Then, via a qualitative content analysis and descriptive statistics, the competency model was used to assess which dimensions of comparison competency were featured in 981 tasks from 20 German, English and French textbooks. Results showed that comparison tasks largely failed to promote autonomous and argumentative comparison process planning. However, numerous tasks performed better on the content-related aspects of comparison. Thus, the competency model presented in this study is a valuable tool to assess and enhance comparison competency in geography education and to promote students’ autonomous geographical thinking.

1. Introduction

Social sciences and geography use comparison as one of the most fruitful methods to gain knowledge. Comparison is not only a “central feature of scientific activity” [1] (p. 822), but, as a fundamental cognitive operation, it allows us to sort units and/or explain similarities or differences according to variables. It is central to children’s learning processes and, hence, can be used for educational purposes, for example in the form of a task in geography textbooks. Comparison is cited as a task in the geography curricula of various countries. In French and English curricula, comparisons are often based around case studies [2] (p. 14): for example, different countries’ vulnerabilities to risks are assessed with regard to the levels of development [3] (p. 102). In Germany, the educational standards insist on command verbs, including “to compare”, as specific actions in tasks that pupils must master [4] (p. 32).
Implementing comparisons in geography classes can contribute not only to the development of methodological skills, but also to the enhancement of content-related knowledge. Indeed, comparing in a meaningful way entails intense reflection on the different comparison units, comparison variables, and comparison objectives [5] (p. 685). Comparisons also allow us to make generalisations and contrast cases with controlled variables. Therefore, fostering comparison competency is crucial to enhance students’ autonomous, reflected, procedural and disciplinary knowledge. It is also a way to promote knowledge that allows them to participate in significant debates [6] (p. 75), such as those on sustainability or spatial inequalities, and to build their own opinions.
However, despite the prevalence of comparison in curricula and textbooks, research has not addressed the analysis of comparison as a subject-specific method or as a competency to be acquired by students. We do not currently have any competency model to assess comparison skills. Additionally, little is known about whether geography textbooks implement comparison tasks in a way that enhances comparison competency and its acquisition by students.
Therefore, in this study we propose a theoretical competency model for comparison in geography education. We used this model to analyse 20 textbooks from three countries: Germany (North Rhine-Westphalia and Berlin-Brandenburg), England and France. We conducted a qualitative content analysis as well as a quantitative analysis to characterise the different types of tasks present in the textbooks and to evaluate them in relation to the competencies that these tasks should enhance. Our research questions were:
-
How can we model comparison competency?
-
To what extent do textbooks enable the development of comparison competencies: how many tasks address comparison and what competency levels are they supposed to enhance?
-
Can we identify differences between countries with regard to the promotion of comparison competency in textbook tasks?
The first section of this article presents our theoretical background and a competency model for developing comparison competencies. Then, we present our methods for analysing comparison tasks using both qualitative and quantitative analysis. In the third section, our empirical results reveal relatively insufficient competency-building in comparison tasks. Finally, we discuss our findings and outline the potential of our competency model to help design comparison tasks for geography classes.

2. Theoretical Background: A Competency Model of Comparison in Geography Education

2.1. Comparison as a Competency

Comparison is the cognitive act of juxtaposing two or more units according to one or more variables to identify similarities and/or differences [7] (p. 6). For example, in the following task from a German textbook for 12 to 13-year-old students: “Compare three megacities of your choice at two different times” [8] (p. 93, own translation), students must reflect on the comparison units (here, megacities), which may be in different countries or continents. They also have to reflect on the variables they will use to compare units, such as spatial expanse or population, and the relevant dates. Comparison tasks are frequent in textbooks: comparing is fundamental for human reasoning and enables learning [9] (p. 103). For example, comparison produces changes in mental representations and knowledge by making it possible to classify elements and/or create categorisation systems [6] (p. 12). Comparison also allows general rules to be abstracted from concrete cases [10] (p. 31), [11] (p. 45) and then applied to new cases or situations [12] (p. 211). The use of comparison is thus a “powerful tool” [9] (p. 105) for learners and educators.
In geography science, comparison entails a content-related dimension since it allows reflection on disciplinary concepts or the production of knowledge about cases. In inductive or nomothetic approaches, comparison facilitates the development of models and the formulation of general laws [5] (p. 691), [13] (p. 87). It can also help to test developed models, identify deviant case studies [5] (p. 692), [13] (p. 116), and characterise processes using a diachronic approach [14] (p. 116), such as in our example, where students can determine the pace of growth in different megacities or differentiate cities with different development statuses. Comparison is also used, in a more interpretive or idiographic approach, to highlight the singularity of the examples studied [15] (p. 20). Comparison thus contributes to knowledge in both the social and the natural sciences.
Comparison also entails a methodological or procedural dimension. Scientific comparison differs from the intuitive comparison common in everyday life in that it involves systematic, controlled methods [1] (p. 822). These methods are themselves the subject of discussion and debate within the scientific community; one example is the four-step model developed by Hilker [16] and Bereday [17], which includes description, interpretation, juxtaposition and comparison. While some authors promote the use of fairly similar units—”Most Similar Systems Design”—in order to be able to control the observed variables [5] (p. 687), others, on the contrary, favour comparing very different units in order to better understand the similarities around a given variable, despite differences on other variables—”Most Different Systems Design” [18] (p. 390), [19] (p. 34), [15] (p. 20). Different approaches to comparison are also a subject of debate in geography. One example is the discussion on the harmonisation of the units and variables used to compare urban systems across the world and to test the validity of Zipf’s rank-size rule for city distribution [20] across different continents [21]. Therefore, comparison is pertinent not only for expanding our knowledge through its results; as a process carried out in a conscious and reflective way, it also contributes to cognition itself [22] (p. 178).
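For readers less familiar with it, the rank-size rule referenced above can be stated compactly (a classical formulation; the notation here is ours rather than taken from [20] or [21]): the population P_r of the city of rank r in an urban system is approximately P_r ≈ P_1 / r^q, where P_1 is the population of the largest city and the exponent q is close to 1 in Zipf’s original statement. The methodological debate then concerns, among other things, how the comparison units (cities) and the comparison variable (their populations) have to be delimited and harmonised before such a rule can be tested across continents.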
Being a fundamental act of human reasoning and a scientific method, comparison is not absent from geography curricula. In Germany, France and England, students are frequently required to perform comparison tasks. Firstly, in Germany, where textbook and classroom tasks must correspond to different requirements and levels [4] (p. 32) [23,24], “to compare” is one of the main tasks students are required to perform in order to be able to transfer and/or analyse data. Secondly, in France, comparison is systematically used in geography classes to compare case studies with other scalar levels and derive explanations or general rules to be learned, via an inductive approach [25] (p. 4), [26] (p. 66). Finally, in England, understanding “the interrelations between geographical phenomena at different scales and in different contexts” is presented as equivalent to thinking “like a geographer” and is a requirement for passing the GCSE exams at the end of secondary school [27] (p. 3). Thus, comparison, in geography curricula and geography science alike, is considered both as a method and a way to gain knowledge and is hence present in textbooks.
However, it can be a very difficult task to perform and may be too demanding for some students without training or guidance. Given this, Wilcke and Budke [28] developed a six-step model to describe comparison as an argumentative and reflective process in geography education (see Figure 1).
To perform the task presented above in an informed manner, students would consequently have to weigh the variables and decide which problem or question they want to solve with the comparison. This task can be very demanding for students as it requires intense reflection on the choices made at each step of the comparison process. Although comparing is fundamental for human reasoning, it is neither a ready-to-use scientific tool, nor an easy method in geography education: our example shows that comparison has to be learned and practiced.
Given the above, we can characterise comparison as a competency. In the social sciences, competency is defined as a capacity or a disposition [29] (p. 73). A competency can be learned [30] (p. 8) and differs from the notion of performance or achievement, although competency is necessary to both [29] (p. 73). Education sciences have also defined educational competencies as “context-specific cognitive dispositions that are acquired by learning and needed to successfully cope with certain situations or tasks in specific domains” [30] (p. 9). Using comparison in geography education is not easy: it means being able to implement a systematic and reflective method, oriented towards geographical knowledge acquisition. It is a professional competency for geographers and can be learned in geography education; comparison is thus an educational competency for geography students, as is the case in other fields such as language and literature education [22] (p. 143).
The concept of competency has gained interest in the context of productivity- or performance-oriented policies [29] (p. 70), [30] (p. 3). In Germany, France and England, although skills and competencies are a new focus in curricula, which have been more output-oriented than knowledge- or input-oriented since the 1990s [31,32,33], there are no specific instructions in these curricula on how to approach comparison in geography classes as a competency. Furthermore, little has been written in geography education on comparison as a competency. Moreover, as a subject-specific method that can help students not only form generalisations and use and reflect on concepts, but also gain knowledge about their own knowledge [6] (p. 74), comparison is one of the necessary tools for the acquisition of powerful knowledge in schools [6,34]. In addition, fostering comparison as a competency in geography education can help develop students’ geographical skills, as well as their maturity and autonomy towards geography as a science. Therefore, in the following section, we propose a competency model for comparison in geography education.

2.2. A Competency Model for Comparison in Geography Education

Here we propose a competency model for comparison. While there are various proposals for competency models in geography education, none are specific to comparison. Competency models in geography education are one of the tools that can be used to foster and assess competencies, since they help to measure competency acquisition [32] (p. 11).
Existing models for comparison in other fields are not sufficient for assessing comparison in geography education. For example, Wellnitz and Mayer [35] (p. 328) studied comparison in biology education. However, their definition of comparison appears limited. Firstly, in their approach, comparison units are not subject to reflection, leaving variables as the only elements to be selected and justified in order to classify the different units. Secondly, the sole objective of comparison in their model is to classify or differentiate biological systems. Yet, as we have seen previously, in geography comparing has objectives that go beyond simple classification and ranking, for example nomothetic approaches.
Comparison can also serve different goals in geography education [36]. Here we propose four general objectives for comparison in geography education: to juxtapose examples in order to build models or rules or to better understand each case study inductively; to apply or test models in a deductive approach; to rank examples and establish typologies; and to acknowledge or identify processes diachronically [36] (p. 4). As these objectives are not accounted for in the Wellnitz and Mayer model [35] (p. 328), another competency model, better adapted to geography education, must be found.
Finally, it seems fundamental not to consider only the links between comparison and geographical knowledge. Developing comparison competency means developing content-related knowledge, but also procedural knowledge and knowledge about one’s own knowledge via the “epistemic tools” used in the discipline [6] (p. 75). Therefore, in the following, we propose a competency model for comparison in geography education (see Table 1).
This competency model is divided into four levels (see Table 1) and postulates increasing competency between levels 1 and 4, level 1 being incomplete competency and level 4 being fully achieved competency. In this model we assume that a level includes and goes beyond the competency achieved in a lower level. We also assume two different kinds of competency increases are possible: competency increases through the addition of new elements in a continuous progression, but also through leaps such as cognitive and conceptual change, via a global conceptual reorganisation of knowledge [37] (p. 187). We relate increasing competency to increasing complexity following Kauertz et al. [38] (pp. 142–143), meaning that complexity is based not only on quantitative criteria, i.e., the number of elements involved, but also on qualitative criteria, such as the ability to include and consider interrelations or concepts.
In addition, our competency model is divided into four independent dimensions (see Table 1) which posit comparison as an educational and scientific tool in geography. The first two dimensions relate to competencies involving processes associated with comparison and argumentation. The first dimension (planning and implementation of comparison processes) refers to comparison as a process marked by different steps, as identified by Wilcke and Budke [28] (p. 8, see Figure 1). At level 4, students are autonomously able to compare, whereas, in the lower levels (1 to 3), teachers or the teaching material provide them with guidance through one or more steps of the comparison process. Students gaining in competency should gradually be able to select the constituent elements of the comparison autonomously. These elements include the units, variables and overall question, but also the material required to study and carry out the comparison.
The second dimension (reflection and argumentative justification of comparison processes) relates to the argumentation and reflection competencies required for the comparison process [28] (p. 8). Here argumentation is used to justify and explain the results [39] (p. 219), [40] (p. 11). Argumentation serves to justify the choices made in the comparison process and to reflect on the process itself; moreover, it is an essential tool for developing other geographical competencies [40] (pp. 15–17) and contributes to students’ scientific literacy [6] (p. 75), [41] via the comparison process.
The last two dimensions relate to geographical content-related elements of comparison. The third dimension (interrelation of geographical information) explores the specificities of knowledge and content related to comparison processes, such as reflection on variables and concepts, and the capacity to generalise and consider comparison contexts. The fourth dimension (achievement of comparison goals) concerns the content-related goals of comparison [36]. The first level encompasses only juxtaposition and ranking, as in the following task: “Compare layered and shield volcanoes according to the following aspects: type of eruption, lava properties, shape and extent” [42] (p. 161, own translation). Here students have everything provided: material, units and variables. The goal of the comparison is only to juxtapose types of volcanoes, not to reflect on the typology or even to establish types. In contrast, the second level is more demanding, involving the application of models and temporal comparisons. The third level involves reflecting on the models and being able to criticise them, as well as the ability to reflect on processes. Finally, the fourth level is similar to common scientific geographical practice: it involves using nomothetic and idiographic approaches within the comparison, as well as establishing typologies.
We applied this competency model for comparison in geography education to analyse tasks in textbooks and tried to establish the extent to which they are suitable for teaching comparison competency.

3. Methodology

To study which competencies comparison tasks effectively promote, we propose in this article an analysis of 20 textbooks from France, England, and Germany. We chose to analyse these three countries to try to identify possible national differences between their textbooks’ approaches. An international analysis also enabled us to reflect on research biases due to the researchers’ proximity to a familiar or to their own culture and to point out “possible directions that could be followed” [43] (p. 158).
Some similarities and differences in school systems and curricula should be mentioned. In all three countries, geography is a compulsory subject in secondary school: for students up to 14 years of age in England, and for students up to 18 years of age in Germany and France. As Germany does not teach geography as an independent subject at primary school, we did not include primary education. Therefore, we analysed textbooks intended for students of secondary schools (“Gymnasium” in Germany, “collège” and “lycée” in France, from 10 to 16+ years of age). School systems in the three countries also offer geography as a speciality subject to be chosen for national examinations. In France, geography and history are always taught by the same teacher and are considered as a “couple” [44] (pp. 89–90). We analysed only the geography sections of French textbooks, which combine history and geography in the same book for the youngest students.
Textbooks are useful to study in order to analyse the extent to which comparison competencies are applied and taught, and which comparison competencies are enhanced (see Table 1). Indeed, textbooks, as educational media, are crucial to the preparation of teaching sequences and to actual teaching practices [45] (p. 9). They also reflect curricular orientations [46] (p. 345). As stated before (see Part I), comparison appears in the curricula from the three countries either as a task students are to perform [4] (p. 32) or as a way to infer generalisations from case studies via induction, or to demonstrate interrelations [25] (p. 4), [27] (p. 3). Analysing how textbooks actually enhance comparison competencies through tasks provides an interesting indication of how they may contribute to curricula in practice. Moreover, textbooks are an interesting source since researchers differentiate between the “real” curriculum, which is experienced by students [47] (p. 133), and the intended curriculum [48] (p. 5), reflected in textbooks [49] (p. 132), which are also part of the “potentially implemented” curriculum [48] (p. 5).
The textbooks were selected according to the following criteria. First, since we wanted to get a broad impression of the different textbooks in the respective countries, we chose textbooks from different federal states where necessary. In Germany, where educational systems depend on the federal states, we selected the two states of North Rhine-Westphalia and Berlin-Brandenburg, whose geography curricula differ considerably. In France and England, curricula are national; consequently, the textbooks were not from different regions. A second important criterion was the variety of publishers, whose titles had to be commonly used in schools. In Germany, the main publishers in geography education are Klett and Westermann, the companies that published the two series we chose. For France and England, we also chose well-known educational publishers: Hachette and Nathan for France, Pearson and Hodder Education for England. Finally, we also selected textbooks corresponding to different theoretical approaches where possible: for England, some of the selected books were older and included “enquiry-based learning” approaches in their structure. Enquiry-based learning approaches were introduced into English curricula from the 2000s onwards and were intended to help students develop scientific and research strategies in the classroom through a constructivist approach [50] (p. 6), [51] (p. 106). Analysing textbooks using these approaches could help us identify a possible English exception when it comes to comparison, which may be treated explicitly as a scientific method to be acquired in the classroom. As a consequence of these choices, our final selection included textbooks from five series: for Germany, we chose Terra, 1st ed. [42,52,53,54] and Seydlitz Geografie [8,55,56,57]; for France, the series Histoire-géographie-EMC [58,59,60,61] and Géographie [62,63], supplemented with the textbook Géographie Term L-ES-S [64]; for England, we chose the Think through Geography series [65,66,67], supplemented with two other books: AQA GCSE (9-1)–Geography [68] and AQA A-Level–Geography, 4th ed. [69].
Tasks are paratextual elements that engage students in a specific action [70] (p. 1325). Tasks, whether questions, investigations or activities, are central to the learning process and a major tool to help students gain competency [45] (p. 10), [71] (p. 24). Tasks can be divided into different categories depending on their objectives: they can aim to help students to memorise, understand, apply, analyse, create or evaluate [23,24]. They also “influence learners by directing their attention to particular aspects of content and by specifying ways of processing information” [72] (p. 161).
The tasks in our sample were clearly identifiable, being separated from the main text of the lesson by means of a letter or a number, and sometimes consisting of one or more subtasks. They were located in diverse parts of the textbooks, some of them being explicitly intended to train specific skills at the end of chapters in the revision or methodology sections. Many tasks were also associated with material to be studied by students; the task’s formulation was sometimes framed as a question or used a command verb or imperative (“Operatoren”, “verbes de consigne”). Sets of tasks often followed a pattern in which students had first to select or reproduce information, then to apply or explain it, and finally to assess it. This corresponds to the three hierarchical steps identified in taxonomies of educational objectives [23,24] and widely promoted in the German school system [4] (pp. 31–32).
We defined comparison tasks as follows: a comparison task, consisting of one or more subtasks, engages students in the production or reception of a comparison while juxtaposing comparison units according to one or more variables. The selection of tasks in textbooks from the three countries needed to take into account linguistic specificities: although in Germany and England the words “task” and “Aufgabe” are equivalent, in France tasks are called “questions” or “exercices”. The term “tâche complexe” is rare and mostly used in language education. After carefully adapting to these linguistic differences, we selected and counted all comparison tasks from a total of 10,681 different tasks. Our sample consisted of 981 tasks (9.18% of the overall tasks).
Along with identifying comparison tasks and noting one independent variable (country), we chose variables that made it possible to classify the tasks into the different dimensions (see Table 1), thus using a deductively formulated category system [73] (p. 12). We examined whether each task’s formulation asked students to select units, variables, material or an overall question (variables used to analyse the 1st dimension of comparison competency, see Table 2). Then, we focused on argumentation: if the task involved argumentation, we carefully observed for what purpose argumentation was required, namely to justify the comparison process or its results (variables used to analyse the 2nd dimension of comparison competency, see Table 2). We also counted the number of variables used in the tasks and noted when students were asked to weigh them and/or to reflect on the overarching concepts (variables used to analyse the 3rd dimension of comparison competency, see Table 2). Finally, all tasks were classified according to the various objectives they were intended to achieve (variables used to analyse the 4th dimension of comparison competency, see Table 2). To ensure the reliability of the classification, tasks were successively classified according to the different variables by two raters, and we also used inter-coder agreement to assess the reproducibility of our classification for the 4th dimension, obtaining a final Kappa coefficient of 0.66, which can be considered substantial [74] (p. 165).
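For reference, Cohen’s Kappa relates observed agreement between two raters to the agreement expected by chance, κ = (p_o − p_e) / (1 − p_e). The following is a minimal sketch of how such an inter-coder check could be computed; it is not the coding pipeline used in this study, and the category labels and ratings are purely hypothetical.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one categorical label per task."""
    n = len(rater_a)
    # Observed agreement: share of tasks on which both raters assigned the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected agreement given each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels (4th dimension: comparison objectives) assigned by two raters.
rater_1 = ["juxtapose", "rank", "build_model", "juxtapose", "test_model"]
rater_2 = ["juxtapose", "rank", "build_model", "rank", "test_model"]
print(round(cohen_kappa(rater_1, rater_2), 2))  # -> 0.74 in this toy example
```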
The classification was made taking into account the formulation of the task but also the presence and type of material provided to students on the textbook pages. As an anchor example [73] (p. 95), we will now explain our classification of the following task, using the different variables: “Compare the selected countries according to their ecological footprint” [54] (p. 257, own translation). This single sub-task came from a German textbook. It was not related to any specific problem to solve. Material was provided to the pupils (a graph), as well as one comparison variable (ecological footprint) and comparison units (41 countries). The task was formulated using the verb “to compare” and was a highly closed, lower-order task (see Table 1) that left little autonomy to pupils and only required them to reproduce information. While classifying the tasks, we formulated the encoding rule [73] (p. 95) that we only took into account what was explicitly asked of students. This task did not explicitly ask pupils to argue or reflect on the comparison process; therefore, in our competency model, it only achieved level 1 in each of the four dimensions, since neither explanation nor argumentation was expected.
The different dimensions (see Table 1) were then analysed separately in order to determine where the textbook tasks fell within the competency levels. We carried out a descriptive statistical analysis of frequencies, which were cross-tabulated by country for comparison. In addition to our quantitative analysis, we examined different examples of tasks, studied in their textbook context, in order to better understand and interpret our results. These examples were carefully and qualitatively selected as the most representative of the quantitative categories identified in the statistical analysis.
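As an illustration of this kind of descriptive cross-tabulation, the sketch below shows how coded task frequencies could be tabulated by country, here using pandas; the data frame, column names and values are hypothetical and do not reproduce the study’s actual coding sheet or results.

```python
import pandas as pd

# Hypothetical coding sheet: one row per comparison task, with the country of the
# textbook and the level (1-4) the task reached in one dimension of the model.
tasks = pd.DataFrame({
    "country": ["Germany", "Germany", "France", "England", "France", "England"],
    "dim1_level": [1, 2, 1, 3, 1, 1],
})

# Cross-tabulate levels by country; normalize="index" yields row-wise shares,
# i.e. the distribution of levels within each country, shown here as percentages.
freq = pd.crosstab(tasks["country"], tasks["dim1_level"], normalize="index")
print((freq * 100).round(1))
```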
In the following section, we present the results obtained from these analyses.

4. Results

Our sample of 981 comparison tasks represented 9.18% of all tasks, a substantial proportion, with differences between countries (Germany: 9.72%, France: 6.25%, England: 11.72%). In the first two subsections, we examine which competencies the tasks achieved in the first two dimensions (see Table 1); in the third subsection, we analyse how they fitted into the content-related dimensions (dimensions 3 and 4, see Table 1).

4.1. How Did the Tasks Enhance Competencies for Planning and Implementing the Comparison Processes in the Textbooks?

We first analysed dimension 1 (“planning and implementation of comparison processes”, see Table 1), related to the degree of autonomy and planning agency required of pupils in the analysed textbooks (see Table 3).
Very few tasks enhanced students’ autonomy in the comparison process (see Table 3). No task encouraged students to formulate their own problem to solve, and very little autonomy was left to them in the selection of the other possible elements of comparison, such as the material (only 5.6%), comparison variables (12.3%) or comparison units (15.1%). Only a few tasks achieved level 3, and no task in any of the analysed textbooks achieved level 4. Nearly three quarters of tasks only achieved level 1 in this dimension and were designed similarly to this example: “Compare migration to and from Europe (documents 1 and 4). What do you notice?” [62] (p. 179, own translation). In this example, the comparison units and the variables used to describe the topic (here migrant motivations and development levels) are provided in the material. Such tasks are oriented towards knowledge acquisition or media literacy through document analysis: students are to provide expected answers, and tasks are only used to verify this knowledge acquisition. Reflecting on the specific process of comparison is not the main focus of the textbook authors, who seem to concentrate instead on geographical content.
Results also showed differences between the countries, as we will now explain in more detail (see Table 3 and Table 4).
Firstly, our figures (see Table 3 and Table 4) revealed that in English textbooks students had more autonomy in the comparison process, which was somewhat less controlled by the task. Material and comparison units were less frequently provided to students than in the other two countries, which made the tasks less constraining than in Germany or France, and may constitute an English exception due to the “enquiry-based learning” approach. For example, the following task, taken from an English textbook for students of 12 years of age, provided room for autonomy in the selection of material, units and variables: “In groups, research different types of alternative energy—the advantages and disadvantages of each. As a class decide which you think will be the most important in the future. You might like to produce a poster, leaflet or Powerpoint presentation to show your findings” [66] (p. 43). Students were encouraged to look for material themselves, to investigate and to reflect on the documents they found in a real research situation. This is part of what Margaret Roberts has called “creating a need to know” [75] (p. 44) as a strategy to stimulate and engage students in a research process. Appealing to the students’ experiences (here, also, to their opinions) is also a strategy to interest them in enquiry-based learning approaches [76] (p. 204), [77] (p. 365). Finally, asking them to “decide”, “as a class”, seems to invite them to make a democratic political decision and can contribute to their political education [78] (p. 11), [79] (pp. 803–804). This interesting task was associated with a chapter about energy distribution and energy as a resource and could raise students’ awareness of sustainability issues. However, it was located in the last part of the “enquiry process” of the teaching unit, in the “homework” part of the double-page spread, and was not essential to the geographical skills or knowledge acquisition involved in the chapter. Some tasks in textbooks are indeed not designed to demand much time from students, so that the teacher can get to the point of the lesson quite rapidly, in response to perceived time constraints in geography education [80] (pp. 6–7).
Secondly, in French textbooks, students were rarely able to select variables on their own, whereas in Germany, they were less autonomous in the selection of units (see Table 3). The results revealed that tasks were largely closed in both countries. However, in France, case studies are used at the start of chapters to introduce important thematic definitions. The topics of the case study are subsequently examined at different (widening) scales, in order to identify differences or commonalities across places or countries inductively. For example, in the textbook for the “classe de Quatrième” (13–14 years of age), the following task was found: “Show that the story of Koudous Seihon’s migration resembles many others” [60] (p. 275, own translation). Here the students had to visit a website and describe an example of a migrant journey across the Mediterranean Sea. The goal is not to reflect on or highlight the diversity of possible ways to characterise migration, or to define its variations: the goal is to identify common and general features characterising international migrations through one example, rather than to differentiate variables or reflect on the meanings of the concepts. Comparison in French textbooks is, then, often used to generalise and introduce or reproduce already available knowledge [81] (p. 178), but not to gain autonomy in research methods or processes. In Germany, there is even less autonomy left to students in the selection of units: in this country the objective is more to reflect on variables. Mastering the comparison process while weighting variables can be highly useful [28] (p. 8). However, it can also constitute a hindrance to understanding the diversity of cases and can lead to the use of more stereotypical and unnuanced examples [82] (p. 75), [83] (pp. 103–104), [46] (p. 346).
Our overall results for this first dimension generally revealed that there was very little room for autonomy left to students in comparison tasks across the three countries. Results also showed differences between countries, due to different curricular or theoretical approaches.

4.2. How Did Textbook Tasks Enhance Competencies Related to Argumentation and Reflection?

In a second step, we analysed whether or not tasks encouraged students to use argumentation to justify the results of comparison and, if relevant, how argumentation was used to justify the procedure in the comparison process (see Table 5), as required in the model developed by Wilcke and Budke [28] (see Figure 1).
Results (see Table 5) revealed the small extent to which students are required to use argumentation in comparison tasks, with many tasks that did not fall into levels one to four of our model. Only 17 tasks achieved level 2, asking students to justify the selection of one comparison element. Only one task in our sample asked students to reflect on and justify more than two elements in the comparison process and therefore achieved level 3. No task achieved level 4.
These results showed, firstly, how low the proportion of tasks explicitly asking students to argue is and, secondly, how content-oriented the few tasks requiring argumentation are designed to be. Argumentation, when it was explicitly called for, was mainly aimed at justifying elements relating to content or to the results of the comparison. This is particularly the case in French textbooks, where argumentation is very much linked to reasoning [84] (p. 3). Colin et al. [84] (p. 7) have revealed the limited extent to which reasoning in school geography actually corresponds to geographical (spatial) reasoning. Indeed, many French tasks asked students to “show” or “prove” a predetermined result of the argumentation, as in the following task: “Why do we talk about American power but only about the ascension of Brazil? (doc 1 and 2)” [64] (p. 183, own translation). This task comes from a chapter comparing the United States of America and Brazil in terms of economic development and political power, introducing the concepts of “emerging” (Brazil) and “developed” countries as well as “global power” (USA). In the task, students must discuss the result of the comparison, but they are not asked to reflect on the process leading to it or on the categories or concepts. Conversely, German textbooks tend to ask students to prove or show predetermined results less frequently, as a possible consequence of a refusal to impose knowledge or ideology on students—as was once common practice in geography classes of the former German Democratic Republic [85].
When analysing the performance of tasks in this dimension, therefore, we found that argumentation was not thought of as a means to acquire methods or knowledge, but rather as a way to explain the results obtained. The tasks in the textbooks studied were not considered by their authors as an opportunity to teach students how to plan and carry out a comparison as part of an autonomous research process, in which they would learn how to use argumentation to support their choices.
The results for the second dimension of comparison competency showed that the tasks are mainly closed tasks that fail to engage students in acquiring methodological skills and reflective geographical thinking.

4.3. How Did the Tasks Enhance Content-Related Competencies for Recognising and Interrelating Geographical Information and Competencies to Help Achieve Comparison Goals?

Finally, we analysed the distribution of tasks in the two content-related dimensions: the third and fourth dimensions (interrelation of geographical information, and achievement of comparison goals).
Our analysis of the third dimension showed that 92.6% of tasks required the management of only one variable (88.1% in German books, 97.1% in French books, 95% in English books). Thus, an overwhelming majority of tasks only achieved level 1 in the third dimension (see Table 1). As a consequence, only 6.2% of tasks (10.2% in German books, 1.9% in French books, 4.1% in English books) achieved level 2, with the involvement of two variables. In only one task did students have to weigh variables, thus attaining level 3. Finally, very few tasks (1.1%) asked students to use comparison contexts or to reflect on concepts.
In the following task from a French textbook, students were supposed to compare different indicators and weigh their relevance to measure different aspects of development: “What are the components of the Social Progress Index? (…) Compare this document with the Planisphere: do developed countries necessarily rank highest in the Social Progress Index? What differences do you notice? Using your answers, show that the Social Progress Index is a way to add nuance to the measures of development and economic performance of states”. [62] (p. 159, own translation). This task used comparison as a tool to understand the interpretive value of different indicators used to compare states and balance the classical financial indicators with other dimensions of welfare and development such as social factors. Interestingly, it helped students to understand different aspects of the concept of development. Although students could only select units (and no other comparison element), which caused this task to only achieve level 2 in the first dimension, it achieved level 4 in the third dimension related to the content of comparison.
The fact that only very few tasks achieved a higher level in this dimension also reveals that textbook authors do not appear to place importance on the construction of geographical content and knowledge through interrelating and weighting variables and elements. Tasks in the studied textbooks do not place students in a research situation, even in textbooks using enquiry-based learning approaches, which did not differ on this point from the other textbooks in our sample. It should also be noted that deeper reflection on comparison processes seems difficult given the limitations of the double-page spread [86] (p. 236), which is the common layout adopted in the textbooks across all three countries.
We also analysed content-related comparison competencies through the objectives of the comparison tasks (fourth dimension, see Table 1); the results are shown in Table 6.
Our results in this fourth dimension (see Table 6) showed a polarised distribution of tasks between easy, lower-order objectives (juxtaposing and ranking tasks, only achieving level 1) and difficult, higher-order objectives (building rules/models or a typology and better understanding examples, achieving level 4). This confirmed the difference initially identified (see Part 1) between ways to define comparison: it is firstly an essential reasoning tool applied to human experience and is used as such in many tasks to make students aware of differences or commonalities. It can indeed be useful to pique students’ interest via simple tasks [76] (p. 192), but some have criticised the use of everyday experience as a limiting approach: to them, education should take students beyond their personal experience [34]. Secondly, comparison is also used, although rather infrequently, to replicate or reproduce scientific methods and processes, with the aim of implementing inductive, nomothetic or idiographic approaches. This significant result shows how comparison is envisaged by textbook authors as a tool to gain knowledge (as it is also characterised in curricula), and how it is in fact used via case studies or examples to help students form generalisations or build rules.
Once again, there were differences between countries: in German textbooks the distribution was less polarised than in English or French ones, with more variation in the functions of tasks. In Germany, models such as city models were also more explicitly used in textbooks. German textbooks also included more demanding tasks (level 4) than tasks achieving any other level, whereas French and English textbooks included more lower-order tasks (only achieving level 1).
The overall results in dimensions 3 and 4 of the competency model (see Table 1) showed that tasks performed better in the fourth dimension, related to the content-related goals of the comparison. Nevertheless, the potential offered by the comparative method is not fully exploited by the tasks, which do not propose reflection on contexts or the evaluation of variables (third dimension).

5. Discussion

In this study, our aim was to develop a competency model for comparison tasks in geography class. We used this model for the further analysis of comparison tasks in a corpus of textbooks from Germany, France and England, in order to test its relevance for such an analysis.
Our competency model proposes four dimensions of comparison competency. These dimensions include not only comparison methods, but also argumentation to justify the choices made during the comparison, allowing the development of scientific literacy [41]; reflection on the variables and the context of the comparison; and the scientific objectives related to the content of the comparison. This comprehensive approach to comparison competency is in line with calls for the promotion of “powerful knowledge” in geography education, including methodological knowledge, geographical knowledge, knowledge that goes beyond the individual experience of students, and knowledge about one’s own knowledge [34].
Analysis of the task distribution in the textbooks studied using our model demonstrated, firstly, the low proportion of comparison tasks that achieved a satisfactory level in the first three dimensions. The low level of autonomy left to students in the comparison process, as seen in the first dimension, highlighted the very closed and reproductive nature of the comparison tasks, which is in line with other research findings on the nature of tasks in geography textbooks in the three countries [87] (p. 261), [46,84]. These closed tasks can be of interest if the objective is to have the students learn science in the sense of an “existing, consensually-agreed and well-established old knowledge” [81] (p. 178). Additionally, strictly reproductive comparison tasks can sometimes serve the purpose of raising students’ interest [75] (p. 44). However, since few tasks allowed students to select the elements of the comparison, few tasks required them to justify this selection or to reflect on the comparison process, as analysed in the second dimension, and tasks rarely entailed argumentation, as previous research also found. This is regrettable, since reproductive tasks alone do not enable students to interrelate information, to solve complex operations, or to reflect and argue on their own knowledge. Finally, the results in the third dimension showed that textbook authors do not consider the possibility of using comparison to interrelate geographical information and reflect on variables or on the context, which is also in line with the results obtained in the previous dimensions. The first three dimensions (planning and implementation of comparison processes; reflection and argumentative justification of comparison processes; interrelation of geographical information) are thus highly interrelated and show concordant results. These results reveal the very limited extent to which textbook authors consider comparison as a competency and as a process that is important for students to manage autonomously and exercise, even though curricula emphasise its relevance and despite the fact that this autonomy would be necessary in geography education designed to facilitate the acquisition of “powerful knowledge” [34] (p. 75).
In the fourth dimension (achievement of comparison goals), however, results showed a polarised distribution and a relatively higher ratio of tasks achieving level 4. These elements confirm findings about the objectives of comparison tasks [36] and the textbooks’ focus on lower-order and closed tasks [46,84]. Textbook authors oriented tasks more towards content-related objectives than towards methodological or competency-building functions with regard to comparison, which is in line with previous findings [88], [46] (p. 352). This implies that textbook authors have particularly high expectations regarding content-related competencies. It also highlights how textbook authors and official curricula do approach comparison as a “powerful tool” [9] (p. 105) to gain knowledge and consider how it can be used to formulate geographical concepts or better understand cases, as it is in geography science [89,90]. However, this content-oriented perception of comparison tasks is incomplete and restrictive without proper reflection on the methods, interrelations and argumentation that should be used in comparison. As a result, students learn few approaches for critically questioning the results of comparisons they encounter in everyday life, e.g., on the Internet or in newspapers.
The overall poor results correspond to previous findings which showed how textbooks, in geography and science education, often focus on the end product of science, seen as a truth, rather than on a view of science as a construct up for debate [44,70,91]. If tasks are “mediating tools for the culture of science and science learning in school” [70] (p. 1332), then comparison tasks should leave more autonomy in the determination of the comparison units, elements and their weighting, as well as include argumentation to support and reflect on the comparison process. Indeed, argumentation contributes to the development of other geographical competencies and to understanding [40] (pp. 15–17), [92] (p. 5). It also allows students to build knowledge on the cognitive processes involved in geography and on their own competencies [6]. Reproductive tasks can sometimes be of use [81] (p. 178) and be purposely included, but this study also showed that there is room for manoeuvre in designing more demanding and comprehensive comparison tasks in geography textbooks and materials.
Our international analysis mostly showed commonalities between the countries, although national differences were also visible. German textbooks tended to reflect more on variables and models in comparison to the textbooks from the other countries. English textbooks implementing enquiry-based learning approaches left more autonomy in comparison processes, whereas French textbooks seemed to leave very little agency to students in the different possible answers to comparison tasks. These differences highlight the potential of enquiry-based learning approaches, even though their implementation was in fact limited [76] (p. 103). They also confirm the existence of different textbook and subject cultures in geography education: French textbooks, oriented towards content knowledge and reasoning, are influenced by encyclopaedism; the analysed English books, leaving more agency to students, show the influence of individualism in the English school system [43] (p. 158); and German textbooks promote propaedeutic knowledge via their emphasis on content-related competencies [93].
Our results show that our model can be used, firstly, to evaluate the possibilities offered by textbook tasks in different countries for the acquisition of comparison competency. Classifying the tasks proposed in textbooks with this model can help in choosing suitable tasks to use in the classroom or in adapting them to the desired level. The model can also be used by textbook authors or teachers to design comparison tasks and either monitor students’ progression or assess and work on different levels within the class. It is thus possible to use the model in different teaching strategies, such as internal differentiation, or to monitor the learning curve. More broadly, implementing the competency model for comparison can promote the learning and development of scientific methods in geography education. This can also contribute to developing students’ content knowledge, procedural knowledge and critical thinking on geographical and societal issues such as sustainable development or spatial conflicts. Finally, further studies could test the implementation of this model and its predictive validity in real classroom situations. Research could also test the competency model in different educational systems and address the necessary local adaptations or improvements.

Author Contributions

Conceptualization, M.S. and A.B.; Data curation, M.S.; Formal analysis, M.S.; Funding acquisition, A.B.; Investigation, M.S.; Methodology, M.S. and A.B.; Project administration, A.B.; Supervision, A.B.; Visualization, M.S.; Writing—original draft, M.S.; Writing—review & editing, M.S. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)–Project Number 57444011–SFB 806.

Acknowledgments

The authors would like to thank Frank Schäbitz for his valuable feedback in different stages of the research and for the collaboration in the overarching project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Piovani, J.I.; Krawczyk, N. Comparative studies: Historical, Epistemological and Methodological Notes. Educ. Real. 2017, 42, 821–840.
2. Ministère de l’Education Nationale. Programme d’histoire-géographie de Seconde générale et technologique: Arrêté du 17-1-2019. Bulletin Officiel de l’Education Nationale Spécial; Ministère de l’Education Nationale: Paris, France, 2019.
3. Department for Education and Skills; Qualifications and Curriculum Authority. The National Curriculum. Handbook for Secondary Teachers in England; Department for Education and Skills: London, UK, 2004.
4. Deutsche Gesellschaft für Geographie (DGfG). Bildungsstandards im Fach Geographie für den Mittleren Schulabschluss mit Aufgabenbeispielen; Selbstverlag DGfG: Bonn, Germany, 2017.
5. Lijphart, A. Comparative Politics and the Comparative Method. Am. Polit. Sci. Rev. 1971, 65, 682–693.
6. Maude, A. What Might Powerful Geographical Knowledge Look Like? Geography 2016, 101, 70–76.
7. Namy, L.L.; Gentner, D. Making a Silk Purse out of Two Sow’s Ears: Young Children’s Use of Comparison in Category Learning. J. Exp. Psychol. Gen. 2002, 131, 5–15.
8. Hellmann, L.; Hennig, J.; Morgeneyer, F.; Schäfer, T.W.; Schwabe, T. Seydlitz Geografie 7/8 Berlin/Brandenburg; Schroedel: Braunschweig, Germany, 2016.
9. Goldstone, R.L.; Day, S.; Son, J.Y. Comparison. In Towards a Theory of Thinking: Building Blocks for a Conceptual Framework; Glatzeder, B., Goel, V., Müller, A., Eds.; On Thinking; Springer: Berlin/Heidelberg, Germany, 2010; pp. 103–121.
10. Gick, M.L.; Holyoak, K.J. Schema Induction and Analogical Transfer. Cogn. Psychol. 1983, 15, 1–38.
11. Gentner, D.; Markman, A.B. Structure Mapping in Analogy and Similarity. Am. Psychol. 1997, 52, 45–56.
12. Loewenstein, J.; Gentner, D. Spatial Mapping in Preschoolers: Close Comparisons Facilitate Far Mappings. J. Cogn. Dev. 2001, 2, 189–219.
13. Tilly, C. Big Structures, Large Processes, Huge Comparisons; Russell Sage Foundation: New York, NY, USA, 1984.
14. Azarian, R. Potentials and Limitations of Comparative Method in Social Science. Int. J. Humanit. Soc. Sci. 2011, 1, 113–125.
15. Gervais-Lambony, P. De Lomé à Harare. Le Fait Citadin; Karthala: Paris, France, 1994.
16. Hilker, F. Vergleichende Pädagogik; Max Hueber: Munich, Germany, 1962.
17. Bereday, G. Comparative Method in Education; Holt, Rinehart and Winston: New York, NY, USA, 1964.
18. Anckar, C. On the Applicability of the Most Similar Systems Design and the Most Different Systems Design in Comparative Research. Int. J. Soc. Res. Methodol. 2008, 11, 389–401.
19. Przeworski, A.; Teune, H. The Logic of Comparative Social Inquiry; John Wiley and Sons: New York, NY, USA, 1970.
20. Zipf, G.K. Human Behavior and the Principle of Least Effort; Addison-Wesley Press: Oxford, UK, 1949.
21. Pumain, D.; Swerts, E.; Cottineau, C.; Vacchiani-Marcuzzo, C.; Ignazzi, C.A.; Bretagnolle, A.; Delisle, F.; Cura, R.; Lizzi, L.; Baffi, S. Multilevel Comparison of Large Urban Systems. Cybergeo Eur. J. Geogr. 2015.
22. Pflugmacher, T. Wolken Vergleichen. Wolkenbilderbücher und Wolkenfotografien im Deutschunterricht. In Komparatistik und Didaktik; Aisthesis Verlag: Bielefeld, Germany, 2018; pp. 139–188.
23. Bloom, B.S. Taxonomy of Educational Objectives; McKay: New York, NY, USA, 1956.
24. Krathwohl, D.R. A Revision of Bloom’s Taxonomy: An Overview. Theory Pract. 2002, 41, 212–218.
25. Eduscol. Thème 1—La Question Démographique et L’inégal Développement; Ministère de l’Education Nationale, de l’Enseignement supérieur et de la Recherche: Paris, France, 2016.
26. Le Mercier, L. Clés Pour L’enseignement de L’histoire et de la Géographie. Aide à la Mise en Œuvre des Programmes de Seconde; Repères pour Agir; Editions du CRDP de l’Académie de Versailles: Versailles, France, 2010.
27. Department for Education. Geography—GCSE Subject Content; Department for Education: London, UK, 2014.
28. Wilcke, H.; Budke, A. Comparison as a Method for Geography Education. Educ. Sci. 2019, 9, 225.
29. Glaesser, J. Competence in Educational Theory and Practice: A Critical Discussion. Oxf. Rev. Educ. 2019, 45, 70–85.
30. Hartig, J.; Klieme, E.; Leutner, D. Assessment of Competencies in Educational Contexts; Hogrefe Publishing: Göttingen, Germany, 2008.
31. Young, M.; Lambert, D.; Roberts, C.; Roberts, M. Knowledge and the Future School: Curriculum and Social Justice, 1st ed.; Bloomsbury Academic: London, UK, 2014.
32. Klieme, E.; Merki, K.M.; Hartig, J. Kompetenzbegriff und Bedeutung von Kompetenzen im Bildungswesen. In Möglichkeiten und Voraussetzungen Technologiebasierter Kompetenzdiagnostik. Eine Expertise im Auftrag des Bundesministeriums für Bildung und Forschung; Hartig, J., Klieme, E., Eds.; BMBF: Bonn, Germany, 2007; pp. 5–15.
33. Thémines, J.-F. Propositions pour un programme d’agir spatial: La didactique de la géographie à l’épreuve de changements curriculaires. Les Sciences de l’Education-Pour l’Ere Nouvelle 2016, 49, 117–150.
34. Young, M. Bringing Knowledge Back In: From Social Constructivism to Social Realism in the Sociology of Education; Routledge: London, UK, 2007.
35. Wellnitz, N.; Mayer, J. Erkenntnismethoden in der Biologie—Entwicklung und Evaluation eines Kompetenzmodells. Zeitschrift für Didaktik der Naturwissenschaften 2013, 19, 315–345.
36. Simon, M.; Budke, A.; Schäbitz, F. The Objectives and Uses of Comparisons in Geography Textbooks: Results of an International Comparative Analysis. Heliyon 2020, 6, 1–13.
37. Schnotz, W.; Preuß, A. Task-Dependent Construction of Mental Models as a Basis for Conceptual Change. Eur. J. Psychol. Educ. 1997, XII, 185–211.
38. Kauertz, A.; Fischer, H.E.; Mayer, J.; Sumfleth, E.; Walpuski, M. Standardbezogene Kompetenzmodellierung in den Naturwissenschaften der Sekundarstufe I. Zeitschrift für Didaktik der Naturwissenschaften 2010, 16, 135–153.
39. Becker-Mrotzek, M.; Böttcher, I. Schreibkompetenz Entwickeln und Beurteilen, 8th ed.; Cornelsen Scriptor: Berlin, Germany, 2012.
40. Budke, A.; Meyer, M. Fachlich Argumentieren Lernen–Die Bedeutung der Argumentation in den unterschiedlichen Schulfächern. In Fachlich argumentieren lernen. Didaktische Forschungen zur Argumentation in den Unterrichtsfächern; Budke, A., Kuckuck, M., Meyer, M., Schäbitz, F., Schlüter, K., Weiss, G., Eds.; Waxmann: Münster, Germany, 2015; Volume 7, pp. 9–28.
41. Erduran, S.; Jiménez-Aleixandre, M.P. Argumentation in Science Education. Perspectives from Classroom-Based Research; Science & Technology Education Library; Springer: Dordrecht, The Netherlands, 2007.
42. Bette, J.; Bünstorf, U.; Hemmer, M.; Jansen, R.; Kersting, R.; Rahner, M. Terra Erdkunde 2—Gymnasium, 1st ed.; Ernst Klett: Stuttgart, Germany, 2017.
43. Pepin, B.; Haggarty, L. Mathematics Textbooks and Their Use in English, French and German Classrooms. Zentralblatt für Didaktik der Mathematik 2001, 33, 158–175.
44. Tutiaux-Guillon, N. Interpréter la stabilité d’une discipline scolaire: L’histoire-géographie dans le secondaire français. In Compétences et Contenus. Les Curriculums en Questions; Audigier, F., Tutiaux-Guillon, N., Eds.; Perspectives en Education et Formation; De Boeck Supérieur: Louvain-la-Neuve, Belgium, 2008; pp. 117–146.
45. Matthes, E.; Schütze, S. Aufgaben im Schulbuch. Einleitung. In Aufgaben im Schulbuch; Matthes, E., Schütze, S., Eds.; Julius Klinkhardt: Bad Heilbrunn, Germany, 2011; pp. 9–15.
46. Lee, J.; Catling, S. What Do Geography Textbook Authors in England Consider When They Design Content and Select Case Studies? Int. Res. Geogr. Environ. Educ. 2017, 26, 342–356.
47. Oates, T. Could Do Better: Using International Comparisons to Refine the National Curriculum in England. Curric. J. 2011, 22, 121–150.
48. Valverde, G.A.; Bianchi, L.J.; Wolfe, R.G.; Schmidt, W.H.; Houang, R.T. According to the Book: Using TIMSS to Investigate the Translation of Policy into Practice through the World of Textbooks; Springer Science & Business Media: New York, NY, USA, 2002.
49. Lepik, M.; Grevholm, B.; Viholainen, A. Using Textbooks in the Mathematics Classroom—The Teachers’ View. Nord. Stud. Math. Educ. 2015, 20, 129–156.
50. Roberts, M. Geographical Enquiry. Teach. Geogr. 2010, 35, 6–9.
51. Ferretti, J. Whatever Happened to the Enquiry Approach in Geography? In Debates in Geography Education; Jones, M., Lambert, D., Eds.; Routledge, Taylor & Francis Group: London, UK, 2013; pp. 103–115.
52. Bette, J.; Bünstorf, U.; Hemmer, M.; Jansen, R.; Kersting, R.; Rahner, M. Terra Erdkunde 1—Gymnasium, 1st ed.; Ernst Klett: Stuttgart, Germany, 2016.
53. Bette, J.; Bünstorf, U.; Bünten, G.; Hemmer, M.; Jansen, R.; Kersting, R. Terra Erdkunde 3—Gymnasium, 1st ed.; Ernst Klett: Stuttgart, Germany, 2018.
54. Boeti, P.; Brodengeier, E.; Korby, W.; Kreus, A.; Pungel, S.; Meike. Terra Geographie Oberstufe, 1st ed.; Ernst Klett: Stuttgart, Germany, 2015.
55. Amstfeld, P. Seydlitz Geografie 5/6 Berlin/Brandenburg; Schroedel: Braunschweig, Germany, 2012.
56. Fleischfresser, L.; Hellmann, L.; Hennig, J.; Morgeneyer, F. Seydlitz Geografie 9/10 Berlin/Brandenburg; Schroedel: Braunschweig, Germany, 2016.
57. Felsch, M.; Töppner, G.; Kort, G.; Müller, F.; Radde, D.; Seeber, C. Seydlitz Geografie Oberstufe Berlin/Brandenburg; Schroedel: Braunschweig, Germany, 2011.
58. Plaza, N.; Vautier, S.; Barthelemy, N.; Cahu, E.; Deguffroy, T.; Fouache, L.; Guerre, S. Histoire-Géographie-EMC Cycle 3/6e—Livre Élève; Hachette: Paris, France, 2016.
59. Plaza, N.; Vautier, S.; Barthelemy, N.; Cahu, E.; Deguffroy, T.; Fouache, L.; Guerre, S. Histoire-Géographie-EMC Cycle 4/5e—Livre Élève; Hachette: Paris, France, 2016.
60. Plaza, N.; Vautier, S.; Barthelemy, N.; Cahu, E.; Deguffroy, T.; Fouache, L.; Guerre, S. Histoire-Géographie-EMC Cycle 4/4e—Livre Élève; Hachette: Paris, France, 2016.
61. Plaza, N.; Vautier, S.; Barthelemy, N.; Cahu, E.; Deguffroy, T.; Fouache, L.; Guerre, S. Histoire-Géographie-EMC Cycle 4/3e—Livre Élève; Hachette: Paris, France, 2016.
62. Janin, E.; Adamski, L.; Bories, V.; Choquet, T.; Fournier, L.; Jannot, H. Géographie 2de; Nathan: Paris, France, 2019.
63. Janin, E.; Marques, P.; Gnahoré-Barata, C.; Bories, V.; Calvez, S.; Choquet, T.; Fournier, L. Géographie 1re; Nathan: Paris, France, 2019.
64. Janin, E.; Bories, V.; Jannot, H.; Le Brazidec, N.; Lechat, C. Géographie Term L-ES-S; Nathan: Paris, France, 2016.
65. Hillary, M.; Mickleburgh, J.; Stanfield, J. Think through Geography 1, 6th ed.; Pearson Education Limited: Edinburgh Gate, Harlow, Essex, UK, 2000.
66. Hillary, M.; Mickleburgh, J.; Stanfield, J. Think through Geography 2, 6th ed.; Pearson Education Limited: Edinburgh Gate, Harlow, Essex, UK, 2001.
67. Hillary, M.; Mickleburgh, J.; Stanfield, J. Think through Geography 3, 6th ed.; Pearson Education Limited: Edinburgh Gate, Harlow, Essex, UK, 2002.
68. Widdowson, J.; Blackshaw, R.; King, M.; Oakes, S.; Wheeler, S.; Witherick, M. AQA GCSE (9-1)—Geography; Hodder Education: London, UK, 2016.
69. Skinner, M.; Abbiss, P.; Banks, P.; Fyfe, H.; Whittaker, I. AQA A-Level—Geography, 4th ed.; Hodder Education: London, UK, 2016.
70. Andersson-Bakken, E.; Jegstad, K.M.; Bakken, J. Textbook Tasks in the Norwegian School Subject Natural Sciences: What Views of Science Do They Mediate? Int. J. Sci. Educ. 2020, 42, 1320–1338.
71. Menck, P. Aufgaben. Der Dreh- und Angelpunkt von Unterricht. In Aufgaben im Schulbuch; Matthes, E., Schütze, S., Eds.; Julius Klinkhardt: Bad Heilbrunn, Germany, 2011; pp. 19–29.
72. Doyle, W. Academic Work. Rev. Educ. Res. 1983, 53, 159–199.
73. Mayring, P. Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution; GESIS Leibniz Institut für Sozialwissenschaften: Klagenfurt, Austria, 2014.
74. Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159–174.
75. Roberts, M. Learning through Enquiry: Making Sense of the Key Stage 3 Classroom; Geographical Association: Sheffield, UK, 2003.
76. Roberts, M. Powerful Knowledge and Geographical Education. Curric. J. 2014, 25, 187–209.
77. Klein, P. Using Inquiry to Enhance the Learning and Appreciation of Geography. J. Geogr. 1995, 94, 358–367.
78. Budke, A. Potentiale der Politischen Bildung im Geographieunterricht. In Politische Bildung im Geographieunterricht; Budke, A., Kuckuck, M., Eds.; Franz Steiner Verlag: Stuttgart, Germany, 2016; pp. 11–23.
79. Pykett, J. Making Citizens in the Classroom: An Urban Geography of Citizenship Education? Urban Stud. 2009, 46, 803–823.
80. Biddulph, M.; Béneker, T.; Mitchell, D.; Hanus, M.; Leininger-Frézal, C.; Zwartjes, L.; Donert, K. Teaching Powerful Geographical Knowledge—A Matter of Social Justice: Initial Findings from the GeoCapabilities 3 Project. Int. Res. Geogr. Environ. Educ. 2020, 29, 1–15.
81. Osborne, J. Teaching Scientific Practices: Meeting the Challenge of Change. J. Sci. Teach. Educ. 2014, 25, 177–196.
82. Pingel, F. UNESCO Guidebook on Textbook Research and Textbook Revision, 2nd ed.; UNESCO: Paris, France, 2010.
83. Niclot, D. L’analyse systémique des manuels scolaires de géographie et la notion de système manuel. Travaux de l’Institut de Géographie de Reims 2002, 28, 103–131.
84. Colin, P.; Heitz, C.; Gaujal, S.; Giry, F.; Leininger-Frézal, C.; Leroux, X. Raisonner, raisonnements en géographie scolaire. Géocarrefour 2019, 93.
85. Budke, A. Und der Zukunft abgewandt. Ideologische Erziehung im Geographieunterricht der DDR; V&R Unipress: Göttingen, Germany, 2010.
86. Lambert, D.; Balderstone, D. Learning to Teach Geography in the Secondary School: A Companion to School Experience; Routledge: London, UK, 2012.
87. Budke, A. Förderung von Argumentationskompetenzen in Aktuellen Geographieschulbüchern. In Aufgaben im Schulbuch; Matthes, E., Schütze, S., Eds.; Julius Klinkhardt: Bad Heilbrunn, Germany, 2011; pp. 253–263.
88. Graves, N.; Murphy, B. Research into Geography Textbooks. In Reflective Practice in Geography Teaching; Kent, A., Ed.; Paul Chapman Publishing: London, UK, 2000; pp. 228–237.
89. Peck, J. Cities beyond Compare? Reg. Stud. 2015, 49, 160–182.
90. Robinson, J. Cities in a World of Cities: The Comparative Gesture. Int. J. Urban Reg. Res. 2011, 35, 1–23.
91. Osborne, J. Science for Citizenship. In Good Practice in Science Teaching: What Research Has to Say; Osborne, J., Dillon, J., Eds.; Open University Press McGraw-Hill Education: Maidenhead, UK, 2010; pp. 46–67.
92. Budke, A. “Ich argumentiere, also verstehe ich”. Über die Bedeutung von Kommunikation und Argumentation im Geographieunterricht. In Kommunikation und Argumentation; Budke, A., Ed.; Diercke: Braunschweig, Germany, 2012; pp. 5–18.
93. Krause, U.; Béneker, T.; Van Tartwijk, J.; Uhlenwinkel, A.; Bolhuis, S. How Do the German and Dutch Curriculum Contexts Influence (the Use of) Geography Textbooks? RIGEO 2017, 7, 235–263.
Figure 1. Method of the comparison step by step [28] (p. 8).
Table 1. Competency model for comparison in geography education (own elaboration). The model distinguishes four dimensions: 1st dimension, planning and implementation of comparison processes; 2nd dimension, reflection and argumentative justification of comparison processes; 3rd dimension, interrelation of geographical information; 4th dimension, achievement of comparison objectives.

Level 4
1st Dimension: Students can carry out comparisons within a self-selected question by independently selecting comparison units, comparison variables and material.
2nd Dimension: Students can justify their answer to the question argumentatively. They can argumentatively justify the choice of question, comparison units, comparison variables and material, and reflect on the limits of the comparison process.
3rd Dimension: Students can compare two or more comparison units using two or more variables and arrive at a meaningful answer to the question by weighting the variables and reflecting on underlying contexts or concepts.
4th Dimension: Students can build rules/models (nomothetic process), better understand examples (idiographic process), or build a typology through comparison.

Level 3
1st Dimension: Students can carry out comparisons within a given question. They independently select two or three elements of the comparison among the units, variables and material used to compare.
2nd Dimension: Students can justify their answer to the question argumentatively. They can argumentatively justify the choice of two elements of the comparison (either units, variables or material) and reflect on the limits of the comparison process.
3rd Dimension: Students can compare two or more comparison units using two variables and arrive at a meaningful answer to the question by weighting the variables.
4th Dimension: Students can test a model or define processes or consistencies through comparison.

Level 2
1st Dimension: Students can carry out comparisons within a given question. They independently select one element of the comparison: either units, variables, or the material used to compare.
2nd Dimension: Students can justify their answer to the question argumentatively. They can argumentatively justify the choice of one element of the comparison (either units, variables or material) and reflect on the limits of the comparison process.
3rd Dimension: Students can compare two or more comparison units using two variables and arrive at a meaningful answer to the question.
4th Dimension: Students can apply a model or identify changes through comparison.

Level 1
1st Dimension: Students can carry out comparisons within a given question with given units, given variables and given material.
2nd Dimension: Students can justify their answer to the question of the comparison argumentatively.
3rd Dimension: Students can compare two or more comparison units using one variable and arrive at a meaningful answer to the question.
4th Dimension: Students can juxtapose or rank units to compare.
Table 2. List of variables used in the textbook analysis. Own elaboration.

Variables used to analyse the 1st dimension of comparison competency:
- Autonomy in the definition of a question or a problem to solve (Yes/No)
- Autonomy in the selection of comparison units (Yes/No)
- Autonomy in the selection of material to analyse (Yes/No)
- Autonomy in the selection of comparison variables (Yes/No)

Variables used to analyse the 2nd dimension of comparison competency:
- Argumentation explicitly required in the task formulation to explain the results (Yes/No)
- Argumentation to justify the choice of material required in the task formulation (Yes/No)
- Argumentation on the comparison variables required in the task formulation (Yes/No)
- Argumentation on the comparison units required in the task formulation (Yes/No)
- Argumentation on the question/problem required in the task formulation (Yes/No)

Variables used to analyse the 3rd dimension of comparison competency:
- Number of comparison variables to use to compare (1, 2, 3 … 10+)
- Weighting of variables (Yes/No)
- Reflexion on concepts/contexts (Yes/No)

Variables used to analyse the 4th dimension of comparison competency:
- Objectives of comparison (Levels 1, 2, 3 or 4; see Table 1)

Independent variable:
- Country (Germany, England, France)
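The Yes/No variables in the first block of Table 2 feed directly into the level definitions of the first dimension in Table 1: the more comparison elements a task leaves to the students, the higher the level. The following minimal Python sketch illustrates that counting logic; the class, field and function names are our own illustrative choices and not the coding instrument actually used in the study.

```python
from dataclasses import dataclass


@dataclass
class CodedTask:
    """One textbook task coded with the Yes/No variables of Table 2 (1st dimension)."""
    selects_question: bool   # autonomy in defining the question or problem to solve
    selects_units: bool      # autonomy in selecting the comparison units
    selects_material: bool   # autonomy in selecting the material to analyse
    selects_variables: bool  # autonomy in selecting the comparison variables


def dimension1_level(task: CodedTask) -> int:
    """Derive the level of the 1st dimension (Table 1) from the coded autonomy variables."""
    freely_chosen = sum([task.selects_units, task.selects_material, task.selects_variables])
    if task.selects_question and freely_chosen == 3:
        return 4  # self-selected question plus all other comparison elements
    if freely_chosen >= 2:
        return 3  # two or three elements selected within a given question
    if freely_chosen == 1:
        return 2  # one element selected within a given question
    return 1      # question, units, variables and material are all given


# Example: a task that only lets students choose the comparison units themselves.
print(dimension1_level(CodedTask(False, True, False, False)))  # -> 2
```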
Table 3. Classification of textbook comparison tasks in the 1st dimension, "planning and implementation of comparison processes" (see Table 1). Own elaboration. Values given as: Germany (N = 411) | France (N = 209) | England (N = 361) | All (N = 981).

Tasks achieving level 4 (where students select all elements of the comparison process: question, material, variables and units): 0% | 0% | 0% | 0%
Tasks only achieving level 3 (where students can select two or three elements of the comparison process): 4.1% | 2.9% | 9.1% | 5.7%
Tasks only achieving level 2 (where students can select one element of the comparison process): 18.5% | 21.0% | 22.4% | 20.5%
Tasks achieving only level 1 (where students have no autonomy in the selection of comparison elements): 77.4% | 76.1% | 68.5% | 73.8%
Total: 100% | 100% | 100% | 100%
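The country columns in Tables 3–6 are plain descriptive statistics: each percentage is the share of that country's coded tasks falling into a given level. A minimal sketch of this tallying step is shown below; the sample data are invented for illustration only and do not reproduce the study's corpus.

```python
from collections import Counter


def level_distribution(levels: list[int]) -> dict[int, float]:
    """Share of tasks per competency level, in percent, rounded to one decimal."""
    counts = Counter(levels)
    total = len(levels)
    return {level: round(100 * counts[level] / total, 1) for level in sorted(counts)}


# Invented coding of eight tasks from one (hypothetical) country sample.
sample_levels = [1, 1, 1, 2, 1, 3, 2, 1]
print(level_distribution(sample_levels))  # -> {1: 62.5, 2: 25.0, 3: 12.5}
```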
Table 4. Proportion of comparison tasks enhancing competencies for planning and implementing comparison processes in relation to the countries. Own elaboration. Values given as: Germany (N = 411) | France (N = 209) | England (N = 361) | All (N = 981).

Autonomy in the definition of a question or a problem to solve: 0% | 0% | 0% | 0%
Autonomy in the selection of material to analyse: 2.9% | 1.9% | 10.8% | 5.6%
Autonomy in the selection of comparison variables: 13.6% | 7.6% | 13.6% | 12.3%
Autonomy in the selection of comparison units: 10.7% | 17.7% | 18.6% | 15.1%
Table 5. Classification of textbook comparison tasks in the 2nd dimension, "reflection and argumentative justification of comparative processes" (see Table 1). Own elaboration. Values given as: Germany (N = 411) | France (N = 209) | England (N = 361) | All (N = 981).

Tasks achieving level 4 (where students can argumentatively justify their results, select all elements, and reflect on the limits of the comparison process): 0% | 0% | 0% | 0%
Tasks only achieving level 3 (where students can argumentatively justify their results, select two elements, and reflect on the limits of the comparison process): 0.25% | 0% | 0% | 0.1%
Tasks only achieving level 2 (where students can argumentatively justify their results, select one element, and reflect on the limits of the comparison process): 1.9% | 0% | 2.5% | 1.7%
Tasks achieving only level 1 (where students can argumentatively justify their results): 20.4% | 26.3% | 26.9% | 24.1%
Tasks not explicitly requiring argumentation: 77.45% | 73.7% | 70.6% | 74.1%
Table 6. Classification of textbook comparison tasks in the content-related 4th dimension, "achievement of comparison objectives" (see Table 1). Own elaboration. Values given as: Germany (N = 391) | France (N = 185) | England (N = 350) | All (N = 926) 1.

Level 4 (building rules/models, better understanding examples or building a typology through comparison): 38.4% | 44.3% | 40% | 40.2%
Level 3 (testing a model or defining processes or consistencies through comparison): 11.5% | 2.2% | 7.4% | 8.1%
Level 2 (applying a model or identifying changes through comparison): 16.9% | 8.6% | 6.3% | 11.2%
Level 1 (juxtaposing or ranking units through comparison): 33.2% | 44.9% | 46.3% | 40.5%
Total: 100% | 100% | 100% | 100%
1 The sample here was reduced to 926 tasks because 55 tasks had the objective of exercising media literacy via the comparison of documents and therefore did not correspond to any of the four content-related comparison objectives identified in our model.
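Up to rounding, the "All" column of Tables 3–6 can be read as a weighted average of the three country columns, with the country sample sizes as weights. The short check below reproduces, for example, the level 3 row of Table 3; the function is only an illustration of this arithmetic, with variable names of our own choosing.

```python
def pooled_share(country_shares: list[float], sample_sizes: list[int]) -> float:
    """Weighted average of country percentages, weighted by the number of tasks per country."""
    total = sum(sample_sizes)
    return round(sum(share * n for share, n in zip(country_shares, sample_sizes)) / total, 1)


# Level 3 of Table 3: 4.1% (Germany, N = 411), 2.9% (France, N = 209), 9.1% (England, N = 361).
print(pooled_share([4.1, 2.9, 9.1], [411, 209, 361]))  # -> 5.7, matching the "All" column
```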
