Article

Cognitive Themes Emerging from Air Photo Interpretation Texts Published to 1960

by Raechel A. Bianchetti 1,* and Alan M. MacEachren 2
1 Department of Geography, Michigan State University, 673 Auditorium Road, East Lansing, MI 48832, USA
2 Department of Geography, Pennsylvania State University, 302 Walker Building, University Park, PA 16802, USA
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2015, 4(2), 551-571; https://doi.org/10.3390/ijgi4020551
Submission received: 24 December 2014 / Revised: 18 March 2015 / Accepted: 19 March 2015 / Published: 10 April 2015

Abstract

Remotely sensed images are important sources of information for a range of spatial problems. Air photo interpretation emerged as a discipline in response to the need for a systematic method of analyzing reconnaissance photographs during World War I. Remote sensing research has since focused on the development of automated methods of image analysis, shifting attention away from human interpretation processes. However, automated methods are far from perfect, and human interpretation remains an important component of image analysis. One important source of information about the human image interpretation process is the textual guides written within the discipline. These early texts placed more emphasis than recent texts on the details of the interpretation process, the role of the human in that process, and the cognitive skills involved. In the research reported here, we use content analysis to evaluate the discussion of air photo interpretation in historical texts published between 1922 and 1960. Results indicate that texts from this period emphasized documenting relationships between perceptual cues and image features of common interest, while reasoning skills and knowledge were discussed less often. The results of this analysis provide a framework of the expert skills needed to perform image interpretation tasks, a framework useful for informing the design of semi-automated analysis tools.

1. Introduction

Remotely sensed images are rich sources of spectral, spatial, and temporal data about the earth’s surface. However, their unnatural vertical perspective can be awkward for non-experts to process [1]. Professional image analysts develop specialized knowledge and skills that allow them to derive information from these images beyond the cursory image information directly visible to non-experts [2]. Despite evidence of differences between novice and expert interpretation abilities, and a widespread misperception of remote sensing images as authoritative and objective representations of reality, these images continue to be provided to non-experts as a basis for reasoning about the world they live in [3].
Remotely sensed images from spaceborne and airborne sensors are becoming increasingly available to the general public for knowledge acquisition and problem-solving. Non-experts are even being integrated into roles traditionally held by expert image analysts in domains such as environmental monitoring and disaster response [4,5]. Additionally, the decreasing cost of ultra-high-resolution imagery has led to a proliferation of its use by online media [6]. Unlike expert image analysts, who have developed skills and knowledge through years of training and experience, these non-experts are often expected to make judgments with little to no interpretation training. Early literature in air photo interpretation and remote sensing stressed the need for specialized skills and knowledge, and the distinction between expert and novice. This literature is a good starting point for determining the skills and knowledge valued by expert image analysts, and it is the goal of this paper to determine what perceptual cues, reasoning skills, and types of knowledge were valued by early experts.
Here we examine the human factors of interpreting remotely sensed images. We used content analysis, a communication analysis method, to examine three aspects of expert image interpretation: perception, cognition, and knowledge. The following section provides an overview of research relevant to understanding the human factors of interpretation. It is followed by a description of the implementation of the content analysis method and the materials used. The results of the content analysis follow, and a final discussion of the results concludes the paper.

1.1. Expertise

Because nature does not follow rigid classifications, the interpretation of earth imagery requires an intimate knowledge of the physical and cultural processes that shape the face of the earth [7]. For most scientific applications, remote sensing image interpretation is carried out by highly trained experts whose expertise is developed in response to specific domain task demands [8,9,10]. Remote sensing image experts excel beyond the abilities of non-experts in decomposing complex and ill-structured problems [11], retrieving knowledge efficiently [12], and problem-solving accuracy [13]. Expert medical image analysts display similar characteristics: they show greater speed and accuracy than non-experts in anomaly detection [14,15], have highly attuned selective attention [16,17], and exhibit the ability to manipulate complex medical knowledge structures [18]. Citizen sensing applications, such as TomNod [19] and GeoWiki [20], have given untrained citizens the ability to contribute to the image analysis process. Currently, these applications only require citizens to perform rudimentary analysis, typically a search-and-identify process or a simple classification [21]. It is likely that future citizen sensing applications will adopt additional image interpretation tasks that are more complex. Questions arise as to what knowledge and skills expert image analysts have that non-experts would need to perform these more complex tasks.
Image interpretation expertise has been studied through the use of Cognitive Systems Engineering (CSE) [22]. Early studies eliciting expert knowledge focused on developing expert systems for automated image interpretation [23], while more recent CSE studies have addressed the design of computer systems to support integrated, human-computer image interpretation [24,25]. These behavioral studies have utilized a variety of techniques for describing work processes in meteorological [26], topographical [27], and medical image analysis [28]. Experimental approaches to understanding the qualities of expert remote sensing image analysts have focused on expertise as an influencing factor in perceptual processing for land use categorization [29], plant pathology [30], tree identification [31], change detection [32], and visual saliency [33]. These studies have found that interpretation accuracy was affected by image complexity [33,34], land use categorical specificity [29], window size [35] and the interpreter’s own domain specialization [31]. Behavioral and experimental approaches to expertise in image analysis have demonstrated the need for flexible, systematic, and multi-faceted methods for understanding complex computer-based systems.
An initial step in many CSE studies is the development of a comprehensive understanding of the problem domain [36]. One method of establishing this knowledge is content analysis of literature written within a scientific domain. Here, content analysis is used to provide insights into the knowledge and skills of expert image analysts in order to determine what traits these experts possess that are necessary for generating comprehensive knowledge about a place on the earth’s surface from remotely sensed imagery. The results of this analysis can be used to inform our expectations of non-expert interpreters and the design of tools to assist with both expert and non-expert image interpretation.

1.2. Content Analysis

Written materials, such as manuals, reports, and instructional texts, can be excellent sources of information about domain expertise [37]. Numerous methods of text analysis have emerged [38], but content analysis is a primary approach used to support cognitive task analysis (CTA) studies [39]. While content analysis has been used for quite some time in historical geographic analysis [40], its integration into GIScience methods is more recent. Content analysis has been used by geographers to analyze both text documents [41] and maps [42,43,44,45]. Whether used for the analysis of text or visual media, content analysis is useful for studying patterns that emerge in communication artifacts, and the process remains much the same: documents are collected, systematic data reduction is carried out using thematic coding, analysis of the codes is conducted, and the results are interpreted [46].
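To make the generic workflow above concrete, the following sketch walks through a toy version of the four steps (collect, reduce to coded excerpts, analyze codes, interpret) in Python. The documents, keyword rules, and code labels are invented stand-ins; the study itself relied on manual thematic coding by a human coder, not keyword matching.

```python
from collections import Counter

# Toy "collected" documents; invented text standing in for real chapters.
documents = {
    "text_a": "Tone and shadow reveal the shape of buildings. Experience guides the analyst.",
    "text_b": "The interpreter compares tone across images before identification.",
}

# Hypothetical keyword rules mapping surface terms to thematic codes.
keyword_rules = {
    "Tone": ["tone"],
    "Shadow": ["shadow"],
    "Shape": ["shape"],
    "Experiential Knowledge": ["experience"],
    "Identification": ["identification", "identify"],
    "Comparison": ["compare", "comparison", "compares"],
}

def segment(text):
    """Crude data reduction: split a document into sentence-level excerpts."""
    return [s.strip() for s in text.split(".") if s.strip()]

def code_excerpt(excerpt):
    """Tag an excerpt with every code whose keywords it mentions."""
    lowered = excerpt.lower()
    return {code for code, words in keyword_rules.items()
            if any(word in lowered for word in words)}

# Analyze the codes: tally how often each theme occurs across all excerpts.
code_counts = Counter()
for text in documents.values():
    for excerpt in segment(text):
        code_counts.update(code_excerpt(excerpt))

print(code_counts.most_common())  # interpretation of these tallies is left to the analyst
```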
One of the benefits of content analysis is its flexibility: analysis goals vary, and a variety of coding methods have been developed to match them. Authors often differentiate between directed and conventional content analysis [47] and between quantitative and qualitative content analysis [48]. Directed content analysis, sometimes called deductive content analysis, applies a coding process to validate a predetermined theory [49]. Conventional, or inductive, content analysis involves the creation and refinement of codes from the texts as they arise [50]. The distinction between quantitative and qualitative methods is less clear, though it is generally accepted that quantitative methods involve some form of numerical assessment of content while qualitative methods rely on numerical methods only to a limited extent [51,52]. Regardless of the specific methods used, the goal of content analysis remains to systematically identify and describe patterns in communication.
This paper presents a content analysis of a set of historically important texts on air photo interpretation. First, the content analysis process is described along with the texts used in the analysis. Second, the results of the analysis are presented and discussed in light of three facets of expertise: visual elements, reasoning, and knowledge. Finally, the implications of these results are presented in relation to image interpretation by non-experts and experts.

2. Methods

A content analysis was conducted of 16 chapters extracted from 16 texts written between 1922 and 1960. This time period was chosen because of the texts’ focus on human interpretation and their limited discussion of computer-based interpretation. As noted by Rabben et al. [53], technical advancements were already beginning to overtake human interpretation activities by 1960. In order to systematically analyze the content of these chapters, a set of relevant codes was created and applied to the texts, and an analysis of the patterns of these codes was performed. The remainder of this section details this content analysis process.

2.1. Code Structure

The goal of this study was to identify and describe human factors of image interpretation through the analysis of the patterns in the descriptions of the interpretation process provided by historical texts written between 1922 and 1960. First, a set of codes was developed to reflect this goal. Psychological assessment of expertise is often based on reasoning, visual perception and knowledge [54,55,56]. We use these three components of interpretation to provide a framework for the analysis: visual perception, reasoning, and knowledge. A description of the code development process for each of the three categories, Interpretation Elements, Knowledge, and Reasoning Skills, follows.

2.1.1. Interpretation Elements

A directed approach to content analysis was used to develop the codes in the Interpretation Elements category, since a well-established framework already exists. The original Image Interpretation Elements (IIE) framework [57] identifies nine characteristics common to all forms of photographic images: shape, size, tone, shadow, pattern, texture, site, association, and resolution. These cues facilitate the recognition of features in remotely sensed images [53,58,59]. This framework served as the starting point for creating the Interpretation Elements code category.
A revised IIE framework emerged in Estes, Hajic and Tinney [58]. In the Estes IIE framework, height replaced resolution, and color was added as a complementary element to tone. Tone, the fundamental IIE, refers to the grayscale value of an image and is a function of the object’s reflectivity. Color is also a function of the object’s propensity to reflect certain regions of the electromagnetic spectrum, and for that reason it is typically grouped with tone in the IIE framework. Tone and color are treated as separate codes in this analysis due to their separation in the texts analyzed and the distinction between color hue and color brightness, a combination of tone and hue. This separation is also consistent with the treatment of tone and color by cartographic experts, most notably in the graphic variables originally proposed by Bertin [60].
Based on consideration of the texts to be analyzed, 11 Interpretation Elements codes were developed, representing all cues that have been included in the IIE framework at some point in the past. The Interpretation Elements codes are presented in Table 1 below.
Table 1. Interpretation Elements code set used in this Content Analysis.
Code | Definition
Association | The relationship between objects that leads to confirmation of their presence.
Color | A property of an object determined by the wavelengths of light that it reflects.
Height | The vertical distance from an object’s topmost point to the ground.
Pattern | The repetition of a feature characteristic, dependent upon the scale and resolution of the image.
Resolution | The ability to resolve features on the landscape. Today this is typically discussed as pixel size, but it also includes boundary contrast, distance, and edge gradients.
Shadow | An absence of light caused by an object blocking it.
Shape | The combination of the geometric properties of an object.
Site | The location-specific features in an image that provide information unique to the place.
Size | The two-dimensional length or width of a feature, differentiated from height to reflect the fact that size and height are frequently discussed separately in the texts.
Texture | The appearance of smoothness or roughness caused by variation in the tonal values of an image.
Tone | The grayscale value that is dependent upon the reflection of light from the surface of a feature.

2.1.2. Knowledge

An initial set of codes was developed for the Knowledge category based on several taxonomies of geographic knowledge. These taxonomies suggest differences between procedural, declarative, and experiential knowledge [61,62,63]. These three types of knowledge are similar to those identified outside of the GIScience domain; for example, De Jong and Ferguson-Hessler [64] differentiate between situational, conceptual (declarative), procedural, and strategic knowledge. Declarative and procedural knowledge are represented in both domains, and experiential knowledge is comparable to situational knowledge. Strategic knowledge, however, is not task-dependent and speaks to more general problem-solving skills.
Taking into account both cognitive and GIScience descriptions of knowledge types, a set of three codes was developed. Procedural knowledge is knowledge of valid methods for task completion, specifically here we refer to the procedures used for interpretation of imagery. Conceptual knowledge is used here to represent facts and concepts from a scientific domain of analysis. Finally, Experiential knowledge addresses understanding of situations as they typically appear within the domain. Such knowledge assists the analyst in determining what information is relevant to the problem at hand and what additional information is necessary. The codes are provided with their definitions below in Table 2.
Table 2. Set of Knowledge Codes used in this Content Analysis.
Code | Definition
Procedural Knowledge | The knowledge of how to perform interpretation, including knowledge of both the tools and the process of analysis.
Conceptual Knowledge | The knowledge of facts and concepts used in the interpretation process, especially knowledge from a particular scientific domain.
Experiential Knowledge | Knowledge gained through experience in photo interpretation or in field-based data collection.

2.1.3. Reasoning Skills

Reasoning is the process of logical thought and includes a number of goal-directed actions. Evidence suggests that experts develop sets of highly specialized, domain-dependent reasoning skills [10,54,65]. Image interpretation tasks, as defined by modern texts, include detection, identification, delineation, enumeration, mensuration, and signification [66,67]. Studies of diagnostic medical image analysis suggest a broader set of visual information processing tasks: looking, seeking, comparing, hypothesis generation and testing, explaining, contextualizing, and rule-based reasoning [68]. It would seem that these general tasks are also inherent in the image interpretation process, perhaps as sub-processes of the interpretation tasks described in [66,67].
Given that no single taxonomy emerged from descriptions of image-based reasoning tasks, it was determined that an inductive method would be most useful for developing a set of Reasoning Skills codes. An inductive method allows the data to guide the development of a code set in the absence of an established taxonomy. Two types of reasoning processes emerged from the initial coding of the excerpts. The first subset refers to types of logical reasoning. Logical reasoning is the systematic analysis of evidence; traditionally, a division is made between inductive and deductive methods. A third concept related to induction and deduction is “convergence of evidence.” This concept, popular among earth scientists and image analysts, suggests that by combining evidence from multiple inductive reasoning processes it is possible to arrive at a logical conclusion [69,70]. The second subset, Interpretation Tasks, refers to the tasks specific to image analysis. During the development of the code set, seven task codes emerged: search, detection, identification, comparison, judgment, measurement, and signification. These codes are similar to the ones proposed by Campbell [67], but enumeration and delineation are noticeably absent from the historical texts as individual tasks. In total, a set of 10 codes representing Reasoning Skills was developed from the subset of excerpts. The Reasoning Skills codes are defined in Table 3 below.

2.2. Coding Process

Content analysis requires a body of text, a structured code set, and a systematic coding process. In total, 32 air photo interpretation texts written between 1895 and 1959 were collected; 16 of those texts contained chapters specific to the process of human image interpretation. The process is represented in Figure 1. First, a unit of analysis was established and the text was demarcated based on that unit; these units are referred to hereafter as excerpts. Here, an excerpt was defined as one to three sentences communicating a single self-contained thought about cognitive aspects of air photo interpretation. Each of the 16 chapters was decomposed into a set of excerpts that described the image interpretation process. Excerpts pertaining to cognitive aspects of interpretation were transcribed into nVivo, a tool commonly used for analyzing text documents [71].
Table 3. Reasoning Skills codes used in this Content Analysis.
Subcategory | Code | Definition
Logical Reasoning | Induction | Evidence is used to support a probable conclusion.
Logical Reasoning | Deduction | A necessarily true conclusion is reached based on determination of a set of verifiable truths.
Logical Reasoning | Convergent Evidence | A probable conclusion is reached upon the convergence of results from inductions from multiple sources of information.
Interpretation Tasks | Search | The process of visually scanning an image.
Interpretation Tasks | Detection | The process of noticing an image feature.
Interpretation Tasks | Identification | The process of recognizing an image feature.
Interpretation Tasks | Comparison | The process of comparing two sources of information (image features, multiple images, or other types).
Interpretation Tasks | Judgment | The process of determining a characteristic of an image feature.
Interpretation Tasks | Measurement | The process of measuring the relative size of an image feature.
Interpretation Tasks | Signification | The process of judging the importance or utility of an image feature to solving an analytical problem.
With the source excerpts loaded into nVivo, the thematic code set described in Section 2.1 was added to the nVivo “project,” a collection of document sources, thematic codes, and analysis files. The nVivo software allows a user to create a hierarchically arranged set of codes that can be tagged to individual excerpts. Each excerpt is then linked to each of the codes that represents its meaning. The first author applied the thematic codes to individual excerpts, with each of the categories applied in turn. In order to determine the consistency of the coding process, two coding sessions, both conducted by the first author, were carried out four months apart. Following the second session, the consistency of the code applications was assessed by comparing the agreement and disagreement between the two coding passes.
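As a concrete illustration of this project structure, the sketch below shows one way the hierarchically arranged code set (Tables 1 to 3) and the excerpt tagging could be represented outside of nVivo. The Excerpt class, its tag method, and the sample excerpt are hypothetical; they are not part of nVivo or of the study’s materials.

```python
from dataclasses import dataclass, field

# Code hierarchy following the categories and codes defined in Tables 1-3.
CODE_HIERARCHY = {
    "Interpretation Elements": ["Association", "Color", "Height", "Pattern", "Resolution",
                                "Shadow", "Shape", "Site", "Size", "Texture", "Tone"],
    "Knowledge": ["Procedural Knowledge", "Conceptual Knowledge", "Experiential Knowledge"],
    "Reasoning Skills": ["Induction", "Deduction", "Convergent Evidence", "Search",
                         "Detection", "Identification", "Comparison", "Judgment",
                         "Measurement", "Signification"],
}
VALID_CODES = {code for codes in CODE_HIERARCHY.values() for code in codes}

@dataclass
class Excerpt:
    source: str                    # text the excerpt was transcribed from
    text: str                      # one to three sentences expressing a single thought
    codes: set = field(default_factory=set)

    def tag(self, code):
        """Link this excerpt to a code, rejecting codes outside the hierarchy."""
        if code not in VALID_CODES:
            raise ValueError(f"unknown code: {code}")
        self.codes.add(code)

# Hypothetical example: one excerpt tagged with codes from two categories.
e = Excerpt(source="Lee 1922", text="Dark tones and long shadows betray the tall stack.")
e.tag("Tone"); e.tag("Shadow"); e.tag("Identification")
print(e.codes)
```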

2.3. Analysis

The goal of this analysis was to identify human factors of image interpretation. Once the documents are coded and the thematic codes have been assessed for consistency, it is possible to analyze: (a) the frequency of code use, (b) the relationships between the codes, (c) the relationships between codes and texts, and (d) the patterns emerging in the use of codes. In the first phase of analysis, the frequency of code occurrences was used to determine what human factors were predominant in discussions of air photo interpretation. In the second phase, we examined the temporal trend in code applications by examining the relationship between the publication dates of the texts and the codes. In the third phase, co-occurrences between the codes were examined for both codes within the same category and codes in two different categories. In the following section, we describe the results of these analyses.
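As a rough illustration of how the first and third analysis phases can be computed from coded excerpts, the sketch below tallies code frequencies and within-excerpt code co-occurrences. The sample assignments are invented; in the study these counts were derived from the coded nVivo project.

```python
from collections import Counter
from itertools import combinations

# Each entry is the set of codes applied to one excerpt (invented sample data).
coded_excerpts = [
    {"Tone", "Shape"},
    {"Tone", "Shadow", "Identification"},
    {"Experiential Knowledge"},
    {"Shape", "Size"},
]

# Phase one: frequency of individual code use.
frequency = Counter(code for codes in coded_excerpts for code in codes)

# Phase three: pairwise co-occurrence of codes within the same excerpt.
cooccurrence = Counter()
for codes in coded_excerpts:
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

print(frequency.most_common())
print(cooccurrence.most_common())
```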
Figure 1. The coding process as outlined in this section.

3. Results

This section provides details regarding the results of the content analysis. First, Section 3.1 provides a description of the texts analyzed for this study. The results of a rate-rerate method for determining coding consistency are presented in Section 3.2. Finally, Section 3.3 provides the results of applying the thematic codes to the excerpts. Following a presentation of these results, a discussion of the results, their implications, and final remarks are provided.

3.1. Texts

The texts used in this analysis were all published prior to 1960. The year 1960 was selected as a cut-off date due to three factors of importance in the history of remote sensing. First, the term “remote sensing” was coined in 1960 by Evelyn Pruitt in response to emerging technologies beyond aerial photography [72]. Second, TIROS-1, the first meteorological earth-observing satellite, was launched on April 1 of that year. Third, the texts predate the widespread use of computer-based image analysis. This separation of human and computer made it possible to isolate the cognitive factors directly related to image interpretation from the cognitive factors associated with the use of computers. The texts analyzed here also emphasized the role of black-and-white film for interpretation despite the availability of color film. The Kodak company patented Kodachrome color film in 1935, and it was not long afterward that it was tested on aerial flights [73]. Widespread use of color film was hindered by camera capabilities (slow shutter speeds, ill-suited lenses), processing challenges, and excessive costs.
Of the 32 books originally gathered, 16 were used for this analysis. To be considered for analysis, a book had to contain a dedicated chapter on human interpretation and had to be written in English. The textual content of each such chapter was used in the analysis. These books originated from the United States, England, Canada, and, in one case, Germany (in English translation). They included field manuals produced by government organizations, general instructional texts written by expert image analysts, and topical manuals about forestry and geologic applications. The texts were textual documents rather than photo interpretation keys. The earliest text included in the content analysis was published in 1922 by W.T. Lee [74]. Two of the biggest drivers of the development of air photo interpretation during this period were World War I and World War II, and this influence is reflected in the number of texts published near those conflicts. Seven of the texts used in this analysis were published between 1941 and 1944, corresponding to World War II. The publication details for all of the texts used in this study are provided in Table 4.
Table 4. Information about the texts used in this analysis. (Text’s publishing origins: United States *, United Kingdom **, Germany #).
Publication Date | Author | Book Title | Section Number
1944 | Abrams | Essentials of Aerial Surveying and Photo Interpretation * | 9
1941 | Bagley | Aerophotography and Aerosurveying * | 6
1932 | Department of the Interior | Topical Bulletin No. 62: The Use of Aerial Photographs for Mapping * | 6
1942 | Eardley | Aerial Photographs: Their Use and Interpretation * | 4
1940 | Hart | Air Photography Applied to Surveying | 2
1941 | Heavey | Map and Aerial Photo Reading Simplified | 8
1922 | Lee | The Face of the Earth as Seen from Above | 1
1959 | Lueder | Aerial Photographic Interpretation: Principles and Applications | 1
1944 | Lobeck and Tellington | Military Maps and Air Photographs * | 6
1959 | Ray | Aerial Photographs in Geologic Interpretation and Mapping | 1
1929 | Royal War Office | Manual of Map Reading, Photo Reading, and Field Sketching ** | 12
1959 | Schwidefsky and Fosberry | An Outline of Photogrammetry # | 5
1948 | Spurr | Aerial Photographs in Forestry * | 3
1941 | U.S. War Department | Field Manual 21–25: Elementary Map and Aerial Photograph Reading | 8
1928 | Winchester | Aerial Photography | 18
1960 | Rabben et al. | Manual of Photographic Interpretation | 3
There are no established guidelines on the number of text sources needed to create a representative sample for content analysis. The sample of 16 texts was deemed appropriate based on two criteria. First, evidence from other analyses of textbooks and lab manuals has shown that results concerning problem-solving processes can be obtained with as few as five sources [75]. Second, the 16 texts represent a relatively large proportion of the texts written during this time period. Texts were acquired from a number of sources, including libraries, online repositories, and private booksellers, and were identified through analysis of the bibliographies of the sources themselves. The remainder of this section details the results of the coding process.

3.2. Coding Reliability

In instances where only a single coder is used, it is possible to use a rate-rerate method to compute a consistency measure [76,77]. In this process, a coder codes a set of excerpts and then, after some pre-defined time has passed, recodes the same excerpts. While this method is considered the weakest measure of reliability [46], it does provide a quantitative indication of the consistency of the coding process. Consistency was computed here using the coding comparison tool in nVivo. This tool provides the agreement and disagreement between the two coding instances, as well as the overall agreement between the two sessions. The time period between the first and second coding sessions was four months. An overall agreement of 98% was obtained between the two sessions.
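The sketch below shows one plausible way to compute such an overall agreement figure from two coding sessions: every (excerpt, code) decision counts as agreement when both sessions either applied or omitted that code. This approximates, but is not necessarily identical to, the calculation performed by nVivo’s coding comparison tool; the codebook and session data shown are invented.

```python
def overall_agreement(session_one, session_two, codebook):
    """Fraction of (excerpt, code) decisions on which both coding sessions agree."""
    assert len(session_one) == len(session_two)
    agree = total = 0
    for codes_a, codes_b in zip(session_one, session_two):
        for code in codebook:
            total += 1
            if (code in codes_a) == (code in codes_b):  # both applied or both omitted
                agree += 1
    return agree / total

# Toy example: a three-code codebook and two excerpts coded twice, four months apart.
codebook = {"Tone", "Shadow", "Identification"}
session_one = [{"Tone"}, {"Shadow", "Identification"}]
session_two = [{"Tone"}, {"Shadow"}]
print(f"{overall_agreement(session_one, session_two, codebook):.0%}")  # 83% for this toy data
```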
Figure 2. The average number of excerpts extracted per page from the texts, ordered by the text’s publication year. The texts are listed in the same order as provided in Table 4 above.

3.3. Code Analysis

A set of 24 codes was applied to 388 excerpts from the texts. The number of excerpts varied widely between the texts, with a minimum of four, a maximum of 118, and an average of 24.5 excerpts per text. The highest number of excerpts (118) came from [53], the chapter in the Manual of Photographic Interpretation [78], a book lauded as the “first comprehensive edition” by Colwell [79]. Figure 2 provides the average number of excerpts per page extracted from each text. No relationship was found between the number of excerpts or code applications and the publication year.
Each excerpt was coded exhaustively with codes from any and all of the categories outlined in Section 2.1. An analysis of the frequency, co-occurrence, and patterns in the code application follows. The results are organized by the code categories described in Section 2.1: Interpretation Elements, Reasoning Skills, and Knowledge. Table 5 provides an overview of the results summarized by the categories.
Table 5. Summary of the coding process. For each of the three main categories, the number of texts in which its codes occur, the total number of excerpts assigned to that category, and the dominant code are presented.
Category | Text Occurrences | Excerpt Occurrences | Dominant Code
Interpretation Elements | 15 texts | 407 excerpts | Tone (n = 104)
Reasoning Skills | 13 texts | 136 excerpts | Identification (n = 46)
Knowledge | 14 texts | 91 excerpts | Experience (n = 48)
Frequency analysis is commonly used in content analysis to determine the prevalence of themes in a text. In this case, the frequency of individual codes was calculated for each code and its corresponding code category. Codes from the Interpretation Elements category were applied 407 times to the sources, Knowledge codes 91 times, and Reasoning Skills codes 136 times. It is important to remember that each excerpt could be associated with more than one code.
Interpretation Elements codes were applied to excerpts in 15 of the 16 texts. Figure 3 provides a summary of the co-occurrence results for this category. In total, Interpretation Elements codes were assigned to 407 excerpts; 60% of these were attributed to three codes: tone (n = 104), shadow (n = 77), and shape (n = 65). Tone was the dominant Interpretation Elements code in five texts. The co-occurrence of codes indicates a relationship between the cues. The most common co-occurrences were tone-shape (n = 22), tone-shadow (n = 20), tone-texture (n = 15), shape-shadow (n = 18), and shape-size (n = 18). Not only are tone, shadow, and shape more frequent than other perceptual cues in these texts, but their frequent co-occurrence suggests relationships with one another. Tone, the basic unit of the image, is the level of gray value in an image. Shadow is a dark tone in an image caused by a lack of exposure to light. Vertical shape, the geometry of an object, is inferred from the object’s shadow. Additionally, as Rabben, Chalmers Jr., Manley and Pickup [53] note, shadow is helpful in determining the characteristics of image features that lack strong tonal contrast by creating a strong tonal change at their edges.
Other Interpretation Elements codes were substantially less dominant than tone, shadow, and shape. Pattern was coded 39 times across 10 texts and co-occurred with nine of the 11 other Interpretation Elements codes. Texture was coded 34 times across eight texts, co-occurring with nine of the other Interpretation Elements codes. The remaining codes were used fewer than 20 times across the set of texts: resolution (n = 7), height (n = 15), color (n = 17), and association (n = 10). Site, the location-specific features that provide information unique to a place, was not identified in this analysis. Instead, the authors tended to discuss general characteristics that could be used to support analysis; for example, the dark tones of a road or the rough texture of moving water.
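As a quick arithmetic check of the 60% figure reported above (our own calculation, not a value stated in the texts), the three dominant codes account for roughly three-fifths of all Interpretation Elements applications:

```python
# Share of Interpretation Elements applications taken by tone, shadow, and shape.
dominant = 104 + 77 + 65                     # tone + shadow + shape
total = 407                                  # all Interpretation Elements code applications
print(dominant, round(dominant / total, 2))  # 246, 0.6
```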
Figure 3. The co-occurrence of Interpretation Element codes.
The 91 Knowledge codes occurred in 14 texts. The co-occurrence of these codes is presented in Figure 4 below. The two predominant codes were domain knowledge (n = 23) and experiential knowledge (n = 48); procedural knowledge was coded only six times. Experiential Knowledge can be broken down further: authors described experiential knowledge developed from field experience (n = 20) and from photo interpretation experience (n = 28). There is an intrinsic relationship between the three knowledge types, as greater experience likely leads to greater procedural knowledge and knowledge of one’s domain.
A total of 136 Reasoning Skills codes were applied to excerpts from 14 of the texts. Figure 5 below provides the co-occurrence information for the application of these codes. The most extensive discussion of human reasoning relating to air photo interpretation is found in the chapter by Rabben et al. [53]. Overwhelmingly, identification was the most often coded reasoning task (n = 46), followed by comparison (n = 20). The remaining Reasoning Skills codes were applied fewer than 15 times each. Many of the authors discussed the process of identification indirectly in their descriptions of perceptual cues that could be used to identify specific features; those instances are captured by the Interpretation Elements codes. Authors discussed several types of comparison. At the individual image level, comparison occurs between multiple image features. At the project level, the interpreter is expected to compare multiple images, maps, ancillary data, and evidence from direct experience in the field under study.
Figure 4. The occurrence of Knowledge codes.
Figure 5. The occurrence of Reasoning Skills codes.
The process of comparison is related to the logical reasoning methods, especially the convergence of evidence. Convergence of evidence was coded seven times in three different sources, induction (n = 12) in four texts, and deduction (n = 11) in five texts. The codes of induction and deduction co-occurred five times. While discussions of logic and reasoning were not prevalent across most of the texts, the depth of detail provided regarding these methods in [53] suggests their importance. The authors spend two pages explicitly detailing the methods of convergence of evidence, deduction, and induction. They also return to the subject of logical reasoning in their discussion of interpretation activities, such as measurement.
Other reasoning and logic codes occurred less extensively. Signification (n = 11) is the process of determining the significance of image features. An interpreter uses perceptual cues to identify an image feature and then judges its significance to the problem at hand; additionally, during convergence of evidence, the interpreter judges the significance of individual inductive conclusions [80]. The process of visual search, for example, was coded seven times, all in a single text. However, the text that did explicitly discuss the search process described the distinction between random and systematic search, as well as the effects of expertise on search processes. Measurement was coded 12 times in five texts, typically in association with photogrammetric measurement or the measurement of shadows to estimate height. The remaining processes, judgment (n = 3) and detection (n = 6), were each coded fewer than 10 times across all of the texts. Both are very similar to signification and identification, and it is possible that authors considered them parts of those processes.
These results provide insights into the importance of human factors for those working in the field of air photo interpretation. The texts described here emphasized the role of the Interpretation Elements in analysis (n = 407), followed by Reasoning Skills (n = 136) and Knowledge (n = 91). The process of interpretation included the examination of multiple sources of evidence, including ancillary data and field data, the use of perceptual cues to examine evidence from images, and a final process of interpreting the meaning of these features in relation to the problem at hand.

4. Discussion

The objective of this analysis was to determine the characteristics of expert image analysts in order to inform the development of semi-automated methods of image analysis that could be utilized by non-experts. To achieve this objective, a content analysis of 16 texts published between 1922 and 1960 was conducted. The texts represent both military and non-military authorship, multiple scientific domains, multiple countries of publication, technical manuals and books, and both specialized and general treatments of the air photo interpretation process. The results indicate that texts from this time period emphasized the role of the Interpretation Elements over Reasoning Skills and Knowledge in their descriptions of the image interpretation process.
The emphasis in the texts on the Interpretation Elements over Knowledge and Reasoning Skills suggests that these elements were seen as more important to the development of expertise in image interpretation. The elements are generalizable to a range of image analysis tasks beyond remote sensing image interpretation [57]. The identification process depends upon the use of these fundamental elements, and in many of the chapters the relationship between an image object and its distinguishing elements is highlighted. A popular example given in the texts is roads: the texts suggest that roads can be distinguished from other linear objects based on their width, shape, tone, texture, and even color [81]. The texts describe such visual cues for object identification quite regularly.
The most commonly identified interpretation elements were tone, shadow, and shape. The frequent co-occurrence of these three codes reflects the interdependent relationships between them. Early black-and-white photographs are composed of numerous shades of gray (tone) reflecting the physical characteristics of the objects at the earth’s surface. The discrimination of tonal change served as the indication of an object’s shape, shadow, texture, and even color. Today, sub-meter imaging and Geographic Object-Based Image Analysis (GEOBIA) have re-emphasized the role of these basic interpretation elements in feature identification [82]. Previous research has shown that, even in the absence of training in these elements, non-expert image analysts can perform reasonably well in the identification of image features [79].
Reasoning about remote sensing images requires an analyst to respond to visual stimuli in the context of their a priori knowledge. The authors identified three key types of knowledge that experts rely on when processing visual stimuli: procedural knowledge, domain knowledge, and experiential knowledge. Current computer-based image interpretation systems fail to match human interpretation of many abstract patterns due to a lack of such knowledge [83]. Similarly, the non-expert is not equipped with the same knowledge structures and experiences as a trained expert, and this lack of knowledge may limit the outcomes of their interpretations. Recent research on the image reading literacy of children has shown that, when children are familiar with the surroundings depicted in imagery, they can successfully identify their local neighborhoods [84]; questions remain as to how well such non-experts are able to interpret images of unfamiliar places.
The topic least discussed by these authors was the reasoning processes required for interpreting images. In many cases, the interpretation process was boiled down to identification and signification, potentially reflecting the strong influence of the military roots of many of the authors and of the discipline itself. An alternative reason for this simplified treatment of reasoning is the general lack of understanding of reasoning at the time. While human factors research was being undertaken, the majority of it focused on perceptual abilities and the selection of ideal interpretation trainees [53,79]. Today, work in medical image analysis has continued to uncover new insights regarding the reasoning processes underlying visual image analysis [37,68,85], while research in remote sensing image analysis has largely focused on the perceptual elements of interpretation [86,87]. There is a general assumption in citizen sensing applications that non-experts have similar skill sets [22]. This is problematic, as non-experts vary much more widely than experts in their knowledge and reasoning skills. Here, the texts analyzed serve as an initial inventory of the types of reasoning skills that should be considered further, but they failed to provide comprehensive descriptions of these expert skills. While somewhat more detailed descriptions of several interpretation tasks can be found in modern texts, such as [59,67], there is still a great need to understand expert reasoning processes in more depth. In particular, questions remain regarding how the extraction of information beyond visible evidence occurs. While evidence has shown that expert analysts are able to analyze image patterns at deeper levels than novices, little work has been done to determine what reasoning processes facilitate such analyses or how prior knowledge is used together with evidence from the image in those reasoning processes.
In 1993, Colwell, one of the foremost experts in image analysis during the twentieth century, suggested that “progress in the human factors was minimal” and that funding was instead being directed towards computer-assisted means of analysis. He suggested that a rebalancing of funding between manual and computer-assisted interpretation was imperative for the development of the discipline. With the rising popularity of GEOBIA and the increasing availability of high-resolution imagery to the general public, these human factors are more important than ever. Content analysis proved useful for identifying key components of the description of expert image analysts, but it failed to provide specific details regarding the reasoning processes that facilitate the integration of visual stimuli and a priori knowledge. In order to support complex problem solving by novices, it will be necessary to deconstruct generalized interpretation tasks, such as identification, into components that can be taught to non-experts.

5. Conclusions

This work presented a framework for understanding the cognitive themes present in early air photo interpretation literature. The content analysis revealed three important characteristics of this early literature. First, authors emphasized the relationship between common image features and their representative visual stimuli. Second, the texts failed to provide details regarding how to reason about imagery beyond the identification of image objects using visual stimuli and the signification of relationships between multiple image objects. Finally, the authors suggest that experts possess experiential, procedural, and domain knowledge that sets them apart from their non-expert counterparts.
In order to support complex problem solving by non-experts, it will be necessary to deconstruct generalized interpretation tasks, such as identification, into unambiguous instructions that are interpretable by people with a wide range of skills, knowledge, and experiences. With the increasing use of remote sensing imagery in applications available to the general public, such as Google Maps or citizen sensing applications, it is important to understand what information can be extracted by novices, what information is only available to those with domain expertise and experience, and how knowledge from other sources interacts with the visual interpretation process.

Acknowledgments

The authors would like to acknowledge funding from National Science Foundation award #1233769 and from the United States Geospatial Intelligence Foundation. The authors would also like to acknowledge the helpful feedback on the dissertation from which this work was generated, provided by the doctoral committee members Douglas Miller, Brian Orland, and Alexander Klippel.

Author Contributions

Raechel Bianchetti completed the content analysis as part of her doctoral dissertation work, which included gathering, transcribing, coding, and analyzing the content from the collected texts. She also wrote the body of the text.
Alan MacEachren served as the doctoral adviser for this work. He provided feedback on the dissertation and on the article presented here, and also contributed to its content.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoffman, R.R.; Markman, A.B. Interpreting Remote Sensing Imagery: Human Factors, 1st ed.; CRC Press: Boca Raton, FL, USA, 2001; p. 283. [Google Scholar]
  2. Nakashima, R.; Kobayashi, K.; Maeda, E.; Yoshikawa, T.; Yokosawa, K. Visual search of experts in medical image reading: The effect of training, target prevalence, and expert knowledge. Front. Psychol. 2013, 4. [Google Scholar] [CrossRef] [PubMed]
  3. Goodchild, M.F. Citizens as sensors: The world of volunteered geography. GeoJournal 2007, 69, 211–221. [Google Scholar] [CrossRef]
  4. Fritz, S.; McCallum, I.; Schill, C.; Perger, C.; Grillmayer, R.; Achard, F.; Kraxner, F.; Obersteiner, M. Geo-wiki.Org: The use of crowdsourcing to improve global land cover. Remote Sens. 2009, 1, 345–354. [Google Scholar] [CrossRef]
  5. Meier, P. Crisis mapping in action: How open source software and global volunteer networks are changing the world, one map at a time. J. Map Geogr. Libr. 2012, 8, 89–100. [Google Scholar] [CrossRef]
  6. Parks, L. Digging into Google Earth: An analysis of “crisis in darfur”. Geoforum 2009, 40, 535–545. [Google Scholar] [CrossRef]
  7. Summerson, C. A philosophy for photo interpreters. Photogramm. Eng. 1954, 20, 396–397. [Google Scholar]
  8. Ericsson, K.A.; Lehmann, A.C. Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annu. Rev. Psychol. 1996, 47, 273–305. [Google Scholar] [CrossRef] [PubMed]
  9. Ericsson, K.A. The influence of experience and deliberate practice on the development of superior expert performance. In The Cambridge Handbook of Expertise and Expert Performance; Cambridge University Press: Cambridge, MA, USA, 2006; pp. 683–703. [Google Scholar]
  10. Werner, S.; Thies, B. Is “change blindness” attenuated by domain-specific expertise? An expert-novices comparison of change detection in football images. Vis. Cogn. 2000, 7, 163–173. [Google Scholar] [CrossRef]
  11. Glaser, R.; Chi, M.T.; Farr, M.J. The Nature of Expertise; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1988. [Google Scholar]
  12. Posner, M.I. Abstraction and the Process of Recognition; Academic Press: New York, NY, USA, 1969. [Google Scholar]
  13. Johnson, E.J. Expertise and decision under uncertainty: Performance and process. In The Nature of Expertise; Chi, M.T.H., Glaser, R., Farr, M.J., Eds.; Psychology Press: Hillsdale, NJ, USA, 1988; pp. 209–228. [Google Scholar]
  14. Nodine, C.F.; Kundel, H.L.; Mello-Thoms, C.; Weinstein, S.P.; Orel, S.G.; Sullivan, D.C.; Conant, E.F. How experience and training influence mammography expertise. Acad. Radiol. 1999, 6, 575–585. [Google Scholar] [CrossRef] [PubMed]
  15. Wood, G.; Batt, J.; Appelboam, A.; Harris, A.; Wilson, M.R. Exploring the impact of expertise, clinical history, and visual search on electrocardiogram interpretation. Med. Decis. Mak. 2014, 34, 75–83. [Google Scholar] [CrossRef]
  16. Myles-Worsley, M.; Johnston, W.A.; Simons, M.A. The influence of expertise on X-ray image processing. J. Exp. Psychol.: Learn. Mem. Cogn. 1988, 14, 553–557. [Google Scholar] [CrossRef]
  17. Krupinski, E.A. The role of perception in imaging: Past and future. Semin. Nucl. Med. 2011, 41, 392–400. [Google Scholar] [CrossRef] [PubMed]
  18. Schmidt, H.; Norman, G.; Boshuizen, H. A cognitive perspective on medical expertise: Theory and implication. Acad. Med. 1990, 65, 611–621. [Google Scholar] [CrossRef] [PubMed]
  19. Barrington, L.; Ghosh, S.; Greene, M.; Har-Noy, S.; Berger, J.; Gill, S.; Lin, A.Y.M.; Huyck, C. Crowdsourcing earthquake damage assessment using remote sensing imagery. Ann. Geophys. 2012, 54. [Google Scholar] [CrossRef]
  20. Fritz, S.; McCallum, I.; Schill, C.; Perger, C.; See, L.; Schepaschenko, D.; van der Velde, M.; Kraxner, F.; Obersteiner, M. Geo-wiki: An online platform for improving global land cover. Environ. Model. Softw. 2012, 31, 110–123. [Google Scholar] [CrossRef]
  21. Kerle, N. Remote sensing based post-disaster damage mapping with collaborative methods. In Intelligent Systems for Crisis Management; Springer: London, UK, 2013; pp. 121–133. [Google Scholar]
  22. Kerle, N.; Hoffman, R.R. Collaborative damage mapping for emergency response: The role of cognitive systems engineering. Nat. Hazards Earth Syst. Sci. 2013, 13, 97–113. [Google Scholar] [CrossRef]
  23. Hoffman, R.R. What is a Hill? An Analysis of the Meanings of Generic Topographic Terms; Battelle Memorial Institute: Research Triangle Park, NC, USA, 1985. [Google Scholar]
  24. Cole, K.; Stevens-Adams, S.; McNamara, L.; Ganter, J. Applying cognitive work analysis to a synthetic aperture radar system. In Engineering Psychology and Cognitive Ergonomics; Springer: Berlin, Germany, 2014; pp. 313–324. [Google Scholar]
  25. Bianchetti, R.A. Looking Back to Inform the Future: The Role of Cognition in Forest Disturbance Characterization from Remote Sensing Imagery; Pennsylvania State University: University Park, PA, USA, 2014. [Google Scholar]
  26. Hoffman, R.R. Human factors psychology in the support of forecasting: The design of advanced meteorological workstations. Weather Forecast. 1991, 6, 98–110. [Google Scholar] [CrossRef]
  27. Hoffman, R.R.; Pike, R. On the specification of the information available for the perception and description of the natural terrain. In Local Applications of the Ecological Approach to Human-Machine Systems; CRC Press: Boca Raton, FL, USA, 1995; Volume 2, pp. 285–323. [Google Scholar]
  28. Patel, V.L.; Kaufman, D.R.; Arocha, J.F. Emerging paradigms of cognition in medical decision-making. J. Biomed. Inform. 2002, 35, 52–75. [Google Scholar] [CrossRef] [PubMed]
  29. Lloyd, R.; Hodgson, M.E.; Stokes, A. Visual categorization with aerial photographs. Ann. Assoc. Am. Geogr. 2002, 92, 241–266. [Google Scholar] [CrossRef]
  30. Nilsson, H.-E. Remote sensing and image analysis in plant pathology. Can. J. Plant Pathol. 1995, 17, 154–166. [Google Scholar] [CrossRef]
  31. Medin, D.L.; Lynch, E.B.; Coley, J.D.; Atran, S. Categorization and reasoning among tree experts: Do all roads lead to Rome? Cogn. Psychol. 1997, 32, 49–96. [Google Scholar] [CrossRef] [PubMed]
  32. Lansdale, M.; Underwood, G.; Davies, C. Something overlooked? How experts in change detection use visual saliency. Appl. Cogn. Psychol. 2010, 24, 213–225. [Google Scholar] [CrossRef]
  33. Davies, C.; Tompkinson, W.; Donnelly, N.; Gordon, L.; Cave, K. Visual saliency as an aid to updating digital maps. Comput. Hum. Behav. 2006, 22, 672–684. [Google Scholar] [CrossRef]
  34. Lloyd, R.; Hodgson, M.E. Visual search for land use objects in aerial photographs. Cartogr. Geogr. Inf. Sci. 2002, 29, 3–15. [Google Scholar] [CrossRef]
  35. Hodgson, M. Window size and visual image classification accuracy: An experimental approach. In 1994 Asprs Acsm Annual Convention and 2VOL; ASPRS: Falls Church, VA, USA, 1994; Volume 2, pp. 209–218. [Google Scholar]
  36. Hoffman, R.R. Methodological Preliminaries to the Development of an Expert System for Aerial Photo Interpretation; U.S. Army Corps of Engineers: Fort Belvoir, VA, USA, 1984. [Google Scholar]
  37. Mirel, B.; Eichinger, F.; Keller, B.J.; Kretzler, M. A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology. J. Biomed. Discov. Collab. 2011, 6, 1–33. [Google Scholar] [CrossRef] [PubMed]
  38. Carley, K. Coding choices for textual analysis: A comparison of content analysis and map analysis. Sociol. Methodol. 1993, 23, 75–126. [Google Scholar] [CrossRef]
  39. Berelson, B. Content Analysis in Communication Research; The Free Press: Glencoe, IL, USA, 1952. [Google Scholar]
  40. Moodie, D. Content analysis: A method for historical geography. Area 1971, 3, 146–149. [Google Scholar]
  41. Hawley, A.J. Environmental perception: Nature and Ellen Churchill Semple. Southeast. Geogr. 1968, 8, 54–59. [Google Scholar] [CrossRef]
  42. Kent, A. Topographic maps: Methodological approaches for analyzing cartographic style. J. Map Geogr. Libr. 2009, 5, 131–156. [Google Scholar] [CrossRef]
  43. Kent, A.J.; Vujakovic, P. Stylistic diversity in European state 1:50,000 topographic maps. Cartogr. J. 2009, 46, 179–213. [Google Scholar] [CrossRef]
  44. Kent, A.J.; Vujakovic, P. Cartographic language: Towards a new paradigm for understanding stylistic diversity in topographic maps. Cartogr. J. 2011, 48, 21–40. [Google Scholar] [CrossRef]
  45. Muehlenhaus, I. Genealogy that counts: Using content analysis to explore the evolution of persuasive cartography. Cartographica 2011, 46, 28–40. [Google Scholar] [CrossRef]
  46. Krippendorff, K. Content Analysis: An Introduction to Its Methodology; Sage: New York, NY, USA, 2012. [Google Scholar]
  47. Hsieh, H.-F.; Shannon, S.E. Three approaches to qualitative content analysis. Qual. Health Res. 2005, 15, 1277–1288. [Google Scholar] [CrossRef] [PubMed]
  48. Zhang, Y.; Wildemuth, B.M. Qualitative analysis of content. In Applications of Social Research Methods to Questions in Information and Library Science; ABC-CLIO: Westport, CT, USA, 2009; pp. 308–319. [Google Scholar]
  49. Potter, W.J.; Levine-Donnerstein, D. Rethinking validity and reliability in content analysis. J. Appl. Commun. Res. 1999, 27, 258–284. [Google Scholar] [CrossRef]
  50. Kondracki, N.L.; Wellman, N.S.; Amundson, D.R. Content analysis: Review of methods and their applications in nutrition education. J. Nutr. Educ. Behav. 2002, 34, 224–230. [Google Scholar] [CrossRef] [PubMed]
  51. Sandelowski, M. Focus on research methods-whatever happened to qualitative description? Res. Nurs. Health 2000, 23, 334–340. [Google Scholar] [CrossRef] [PubMed]
  52. Marsh, E.E.; White, M.D. Content analysis: A flexible methodology. Libr. Trends 2006, 55, 22–45. [Google Scholar] [CrossRef]
  53. Rabben, E.L.; Chalmers, E.L., Jr.; Manley, E.; Pickup, J. Fundamentals of photo interpretation. In Manual of Photographic Interpretation; Colwell, R.N., Ed.; American Society of Photogrammetry and Remote Sensing: Washington, DC, USA, 1960; pp. 99–168. [Google Scholar]
  54. Klein, G.A.; Hoffman, R.R. Perceptual-cognitive aspects of expertise. In Cognitive Science Foundations of Instruction; Routledge: Hillsdale, NJ, USA, 1993; pp. 203–226. [Google Scholar]
  55. Rogers, E.; Arkin, R.C. Visual interaction: A link between perception and problem solving. In Proceedings of the 1991 IEEE International Conference on Systems, Man, and Cybernetics, Charlottesville, VA, USA, 13–16 October 1991; pp. 1265–1270.
  56. Hochberg, J. On cognition in perception: Perceptual coupling and unconscious inference. Cognition 1981, 10, 127–134. [Google Scholar] [CrossRef] [PubMed]
  57. Olson, C.E. Elements of photographic interpretation common to several sensors. Photogramm. Eng. Remote Sens. 1960, 26, 651–656. [Google Scholar]
  58. Estes, J.E.; Hajic, E.J.; Tinney, L.R. Fundamentals of image analysis: Analysis of visible and thermal infrared data. Manu. Remote Sens. 1983, 2, 1233–2440. [Google Scholar]
  59. Avery, T.E.; Berlin, G.L. Principles of air photo interpretation. In Fundamentals of Remote Sensing and Airphoto Interpretation; Maxwell Macmillan: New York, NY, USA, 1992; pp. 51–70. [Google Scholar]
  60. Bertin, J. Semiology of Graphics: Diagrams, Networks, Maps; University of Wisconsin Press: Madison, WI, USA, 1983; p. 460. [Google Scholar]
  61. Fischer, M.M.; Nijkamp, P. Geographic information systems and spatial analysis. Ann. Reg. Sci. 1992, 26, 3–17. [Google Scholar] [CrossRef]
  62. Thorndyke, P.W.; Hayes-Roth, B. Differences in spatial knowledge acquired from maps and navigation. Cogn. Psychol. 1982, 14, 560–589. [Google Scholar] [CrossRef] [PubMed]
  63. Freundschuh, S.M. Spatial Knowledge Acquisition of Urban Environments from Maps and Navigation Experience; State University of New York at Buffalo: Buffalo, NY, USA, 1992. [Google Scholar]
  64. De Jong, T.; Ferguson-Hessler, M.G. Types and qualities of knowledge. Educ. Psychol. 1996, 31, 105–113. [Google Scholar] [CrossRef]
  65. Lesgold, A.; Rubinson, H.; Feltovich, P.; Glaser, R.; Klopfer, D.; Wang, Y. Expertise in a complex skill: Diagnosing X-ray pictures. In The Nature of Expertise; Chi, M.T.H., Glaser, R., Farr, M.J., Eds.; Taylor and Francis: Hoboken, NJ, USA, 1988; pp. 311–342. [Google Scholar]
  66. Colwell, R.N. A systematic analysis of some factors affecting photographic interpretation. Photogramm. Eng. 1954, 20, 433–454. [Google Scholar]
  67. Campbell, J.B. Introduction to Remote Sensing; Guilford Press: New York, NY, USA, 2002. [Google Scholar]
  68. Rogers, E. A cognitive theory of visual interaction. In Diagrammatic Reasoning: Cognitive and Computational Perspectives; Glasgow, J., Narayanan, N.H., Karan, B.C., Eds.; MIT Press: Cambridge, MA, USA, 1995; pp. 481–500. [Google Scholar]
  69. Baker, V.R. Geosemiosis. Geol. Soc. Am. Bull. 1999, 111, 633–645. [Google Scholar] [CrossRef]
  70. Colwell, R.N. Four decades of progress in photographic interpretation since the founding of Commission VII (IP). Int. Archiv. Photogramm. Remote Sens. 1993, 29, 683–683. [Google Scholar]
  71. Bazeley, P.; Jackson, K. Qualitative Data Analysis with Nvivo; Sage Publications Limited: Thousand Oaks, CA, USA, 2013. [Google Scholar]
  72. Pruitt, E.L. The office of naval research and geography. Ann. Assoc. Am. Geogr. 1979, 69, 103–108. [Google Scholar] [CrossRef]
  73. Goddard, G.W.; Copp, D.S. Overview: A Life-Long Adventure in Aerial Photography; Doubleday: New York, NY, USA, 1969. [Google Scholar]
  74. Lee, W.T. The Face of the Earth as Seen from the Air: A Study in the Application of Airplane Photography to Geography; American Geographical Society: New York, NY, USA, 1922. [Google Scholar]
  75. Domin, D.S. A content analysis of general chemistry laboratory manuals for evidence of higher-order cognitive tasks. J. Chem. Educ. 1999. [Google Scholar] [CrossRef]
  76. Tinsley, H.E.; Brown, S.D. Handbook of Applied Multivariate Statistics and Mathematical Modeling; Academic Press: New York, NY, USA, 2000. [Google Scholar]
  77. DeVon, H.A.; Block, M.E.; Moyle-Wright, P.; Ernst, D.M.; Hayden, S.J.; Lazzara, D.J.; Savoy, S.M.; Kostas-Polston, E. A psychometric toolbox for testing validity and reliability. J. Nurs. Scholarsh. 2007, 39, 155–164. [Google Scholar] [CrossRef] [PubMed]
  78. Colwell, R.N. Manual of Photographic Interpretation; American Society of Photogrammetry: Herndon, VA, USA, 1960; p. 972. [Google Scholar]
  79. Colwell, R.N. The photo interpretation picture in 1960. Photogrammetria 1960, 16, 292–314. [Google Scholar] [CrossRef]
  80. Chamberlin, T.C. The method of multiple working hypotheses. Science 1890, 15, 92–96. [Google Scholar] [PubMed]
  81. Lee, W.T. Airplanes and geography. Geogr. Rev. 1920, 10, 310–325. [Google Scholar] [CrossRef]
  82. Hay, G.J.; Blaschke, T. Special issue: Geographic object-based image analysis (GEOBIA). Photogramm. Eng. Remote Sens. 2009, 76, 121–122. [Google Scholar]
  83. Gardin, S.; van Laere, S.M.J.; van Coillie, F.M.B.; Anseel, F.; Duyck, W.; de Wulf, R.R.; Verbeke, L.P.C. Remote sensing meets psychology: A concept for operator performance assessment. Remote Sens. Lett. 2011, 2, 251–257. [Google Scholar] [CrossRef]
  84. Svatonova, H.; Rybansky, M. Children Observe the Digital Earth from above: How They Read Aerial and Satellite Images; IOP Publishing: Kuching, Malaysia, 2013. [Google Scholar]
  85. Rogers, E. A study of visual reasoning in medical diagnosis. In Proceedings of Eighteenth Annual Conference of the Cognitive Science Society, San Diego, CA, USA, 12–15 July 1996; pp. 213–218.
  86. Van Coillie, F.; Gardin, S.; Anseel, F.; Duyck, W.; Verbeke, L.; de Wulf, R. Variability of operator performance in remote sensing image interpretation: The importance of human and external factors. Int. J. Remote Sens. 2014, 35, 754–778. [Google Scholar] [CrossRef]
  87. Battersby, S.; Hodgson, M.E.; Wang, J. Spatial resolution imagery requirements for identifying structure damage in a hurricane disaster: A cognitive approach. Photogramm. Eng. Remote Sens. (PE&RS) 2012, 78, 625–635. [Google Scholar] [CrossRef]
