Review

Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future

by Chia-Chien Wu 1,2,* and Jeremy M. Wolfe 1,2,3

1 Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
2 Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
3 Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
* Author to whom correspondence should be addressed.
Vision 2019, 3(2), 32; https://doi.org/10.3390/vision3020032
Submission received: 1 April 2019 / Revised: 9 June 2019 / Accepted: 18 June 2019 / Published: 20 June 2019
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Abstract:
The eye movements of experts reading medical images have been studied for many years. Unlike topics such as face perception, medical image perception research must cope with substantial, qualitative changes in the stimuli under study because of dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because such volumes simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception, with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical ‘scenes’, and we discuss how tracking experts’ eyes may provide useful insights for medical education and screening efficiency.

1. Introduction

Detection and diagnosis in medicine are frequently based on the analysis of medical images. Clinicians of various specializations consume a vast volume of medical images each day. They perform remarkable tasks with these images, but they are not perfect. For instance, more than two million new cases of breast cancer and lung cancer were diagnosed worldwide in 2018, according to the World Cancer Research Fund (worldwide cancer data, https://www.wcrf.org/dietandcancer/cancer-trends/worldwide-cancer-data), yet we know that many cancers are not discovered, even though they may be visible in the image (e.g., [1,2,3,4]). Though the acceptance of routine cancer screening has risen and imaging technology has continued to advance, false negative and false positive rates remain higher than we would expect or desire [5,6]. Measuring the eye movements of experts as they perform medical image perception tasks is one way to identify possible weak spots in the processes of medical image perception, raising the possibility of interventions that could improve performance. Eye tracking can also be used to assess the effectiveness of those interventions. Finally, from the vantage point of the basic science of perception, expert performance with medical images can give us insight into the processes of more ordinary acts of scene perception.
One of the interesting aspects of medical image perception research is that the stimuli keep changing. Fifty years ago, questions about medical image perception would have been questions about static, 2D, achromatic x-ray images presented on film. Today, technologies like CT create a set of virtual slices through some volume of the body (e.g., the chest) and produce a 3D volume of image data to be examined [7]. Clinicians might instead examine nuclear medicine images (e.g., positron emission tomography, PET) or ultrasound [8], where a 3D dataset might be rendered as a rotating figure. Furthermore, many images are in color today [9]. The dataset might even be 4-dimensional, with a time-varying fourth dimension, as in CT angiography, where a contrast agent is injected into the bloodstream and tracked in 3D as it sweeps through the heart, brain, etc. [10]. Thus, there are three spatial dimensions and the images evolve over time [11], creating 4D datasets.
Each advance in technology requires (or should require) a new set of psychophysical studies because each new modality presents new opportunities and challenges for the perceptual and cognitive capabilities of the observers. What we learned about search strategies or patterns of errors in 2D films may be only loosely applicable to newer forms of imaging. It is nearly impossible to develop better viewing strategies without understanding how human perception works in these new modalities.
Though the technology changes, many of the basic perceptual issues do not. Kundel, Nodine, and their colleagues and students have worked for many years on a set of issues that remain relevant today. We will organize this brief review starting with the scanpaths that can be measured during visual search, since sequences of eye movements are the basic data collected in eye movement studies in medical image research. Once scanpaths are collected, they can be aggregated in various ways to address other questions, such as the extent of the “useful field of view” (e.g., how much of the image can be processed around the current point of fixation?) and the nature of search errors (e.g., was the missed cancer fixated during search?). The topic of “satisfaction of search” is an extension of the topic of search errors. Finally, we will discuss what people can perceive in a single glimpse—the ‘gist’ that can be extracted when there is no scanpath at all.
For other important topics in the field of eye movements in medical image perception (e.g., medical training and education), there are other useful reviews; see, for example, [12,13,14].

2. Scanpaths: Searching in Scenes and Medical Images

The sequence of saccades and fixations made when an observer views an image is known as the “scanpath” [15,16]. It has long been used as a clue to what observers are doing when they look at something. Essentially all eye movement studies begin by recording a set of scanpaths. Scanpaths tend to be substantially different each time an image is viewed. Thus, for many of the studies described below, scanpaths are aggregated in ways that lose the precision of space and time that a single scanpath would possess. For example, it is often useful to measure the likelihood that a specific location in the image was fixated in the scanpaths of many observers. In such an analysis, temporal order is sacrificed to create a spatial map of fixations.
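To make this kind of aggregation concrete, here is a minimal sketch in Python (our own illustration, not code from any study cited here) that pools fixations from many scanpaths into a smoothed spatial map. The function name, the input format, and the `sigma` smoothing radius are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(scanpaths, image_shape, sigma=30.0):
    """Collapse many scanpaths into a spatial map of fixation probability.

    scanpaths  : list of (x, y) fixation sequences, one per viewing;
                 temporal order is deliberately discarded.
    image_shape: (height, width) of the image in pixels.
    sigma      : smoothing radius in pixels (an arbitrary choice here).
    """
    counts = np.zeros(image_shape, dtype=float)
    for path in scanpaths:
        for x, y in path:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < image_shape[0] and 0 <= xi < image_shape[1]:
                counts[yi, xi] += 1.0          # tally fixations per pixel
    density = gaussian_filter(counts, sigma)   # spread point fixations
    return density / density.sum()             # normalize to probabilities
```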
Yarbus [17] famously held that the scanpath could reveal what an observer was thinking about when viewing a scene, though that may not be as straightforward as he thought [18] (see also [19,20,21]). In the absence of eye tracking, people are surprisingly poor at knowing which parts of an image they have looked at. We [22,23] investigated how well people could recall their own fixations after a brief period of scene search. In Võ et al. [22], observers were asked to perform a change detection task. They viewed a scene for 3 s, then saw a new version of the scene and were asked what had changed. On 25% of trials, after viewing the first scene for three seconds, observers were asked to click on 12 locations that they thought they had fixated. Humans make voluntary eye movements about four times per second; hence, 12 fixations in 3 s. People were not random, but the results showed that observers’ memories for where they had looked in the scene were no better than their guesses about where someone else might have looked in the same scene. That is, you might know that it would be reasonable to look at the coffee mug on the desk while viewing an office scene, but you have no privileged access to whether you actually looked at the mug. Kok et al. [23] went on to demonstrate that online feedback has only a marginal effect on memory for the scanpath. They used a gaze-contingent window during the search to highlight where observers were looking, and it was still difficult for those observers to maintain a representation of where they had looked once the search was done.
Obviously, the failure to remember where you have looked could contribute to errors in medical image reading because radiologists may also have poor representations of which areas of an image they have examined. This could be particularly true for 3D volumes of image data like CT and MR. What have you really “looked at” once you have scrolled through a sequence of images covering, for example, the volume of the chest (see Figure 1)? This is closely related to the question of the UFOV, which is discussed below. With eye tracking, it would be possible to give feedback to a radiologist who is completing a search. For example, an eye-tracking computer might tell the observer, “You may have a good reason, but do you know that you have not looked at this entire region?” or “You spent a lot of time looking at this spot. You did not label it as abnormal, but it clearly grabbed your attention. Do you want to reconsider before you move on?” Kundel, Nodine, and Krupinski [24] found that giving feedback of this second sort to radiologists had a positive impact, and explorations of this type of intervention continue [25]. However, in other contexts, being told where you have fixated may not be that useful [26,27].
The increasing use of techniques like computed tomography (CT) has converted the measuring of the scanpath from a 2D to a 3D problem. Many modern imaging technologies create 3D volumes of image data. These are often rendered into ‘stacks’ of 2D images. The reader typically scrolls back and forth through the stack while examining the currently visible 2D image (see Figure 1).
Thus, the eyes move in the XY plane while movement in Z, depth in the stack, is typically controlled by the observer through the workstation.
There has been a limited amount of research into search through 3D volumes of image data [28,29,30,31,32,33]. These 3D volumes represent an increase in the amount of information that observers need to process. They also, necessarily, change observers’ eye movement patterns from a 2D search strategy in the X and Y dimensions to a 3D search in X, Y, and Z. Drew et al. [34] had 24 radiologists search for lung nodules in stacks of images drawn from patients undergoing testing for lung cancer. As shown in Figure 2, to visualize the data, they color-coded the slices so that each quadrant had its own hue. Then they plotted that hue (a coarse measure of XY position) as a function of time in the search and of the Z dimension, the slice in depth. They reported that radiologists could be coarsely split into two groups: “drillers”, who moved rapidly in depth while staying in a relatively constant spot in the XY plane, and “scanners”, who moved slowly in depth while looking much more widely in the current XY plane.
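A rough sketch of how one might compute the two statistics behind the driller/scanner distinction, again purely illustrative: the quadrant coding mirrors the Figure 2 visualization, but the function names and classification thresholds are our own guesses, not values from Drew et al. [34].

```python
import numpy as np

def quadrant(x, y, width, height):
    """Coarse XY code in the spirit of Figure 2: one label per quadrant."""
    return (2 if y >= height / 2 else 0) + (1 if x >= width / 2 else 0)

def classify_reader(fixations, width, height):
    """fixations: rows of (t_seconds, x, y, z_slice).
    Drillers move fast in depth while staying put in XY; scanners do the
    reverse. The thresholds below are illustrative assumptions."""
    f = np.asarray(fixations, dtype=float)
    t, x, y, z = f[:, 0], f[:, 1], f[:, 2], f[:, 3]
    z_speed = np.mean(np.abs(np.diff(z)) / np.diff(t))   # slices per second
    quads = [quadrant(xi, yi, width, height) for xi, yi in zip(x, y)]
    xy_mobility = np.mean(np.diff(quads) != 0)           # quadrant-change rate
    return "driller" if z_speed > 2.0 and xy_mobility < 0.2 else "scanner"
```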
We know that viewing lung CT images as an observer-controlled ‘movie’ is superior to viewing slices in a 2D array [35] but we don’t yet have enough data to know if we should be recommending ‘driller’ or ‘scanner’ strategies. Nevertheless, this type of scanpath research is an example of how eye movement recording can change our understanding of how the behavior of radiologists (or other experts) responds to changes in technology.
Once revealed, drilling and scanning seem like rather natural categories of scanpaths for 3D volumes of images. However, different technologies and different tasks may produce different oculomotor approaches. In breast imaging, the analog to the set of slices produced by CT is produced by digital breast tomosynthesis (DBT) [36], a somewhat different x-ray technology that also produces a stack of slices. At least one manufacturer explicitly trains radiologists to “drill” (albeit not using that term). Nevertheless, when Aizenman et al. [28] measured the scanpaths of radiologists reading DBT stacks, they found that the XYZ paths did not conform to the driller or scanner patterns from lung CT. Readers did tend to move rapidly back and forth through the depth of the breast, consistent with drilling, but they did not restrict themselves to one part of the breast in any rigorous manner. They seemed to scan and drill at the same time (“scrilling”?).
In thinking about possible differences between scanning in lung CT and breast tomosynthesis, it is worth noting that the different screening modalities are often used for different diagnostic purposes. For example, DBT is often used as a secondary diagnostic aid in addition to traditional mammography. But other 3D volumes of images, such as lung CT, serve as the primary screening tool. These different diagnostic tasks probably lead to very different scanpaths. Thus, it is important to search for general rules and also to check for those rules in multiple specific cases.
Returning to lung CT, if one looks more closely at either drilling or scanning behavior, one can see that readers toggle quickly back and forth between images. They may be checking whether items that might be nodules pop in and out of visibility as the observer toggles between a few neighboring slices in the 3D stack. Other features, like vessels, snake through the image over many slices and would move, rather than vanish, as the viewer moves a short distance through the stack. This illustrates one benefit of “toggling” between two (or more) nearly identical images: looking for change. Looking for change in this way is most effective when the toggled images are largely identical. Interestingly, there may be benefits even when the images are not identical, as would be the case with a pair of mammograms of the same breast taken at two different times. Drew et al. [37] asked radiologists to compare positive mammogram cases with the negative prior exams acquired 2–3 years earlier from the same patients. Radiologists read the current and prior stimuli either in Side-By-Side mode or in Toggle mode. In Toggle mode, the radiologist could alternate between current and prior images at the same location on the screen. In Side-By-Side mode, current and prior images were visible at the same time on the screen. Drew et al. found that toggling produced a substantial improvement in time (~15%) and a small improvement in accuracy (~6%). The time benefit seems to result from a reduction in the number of required eye movements. In side-by-side viewing, readers made many saccades between the two images. Even though saccades are “cheap”, enough of them can add up to a real cost in time. The possible benefits of toggling for accuracy deserve further study.
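A back-of-envelope calculation shows how “cheap” saccades can still add up. All of the numbers below are illustrative assumptions, not measurements from Drew et al. [37].

```python
# Rough cost of the extra between-image saccades in side-by-side reading.
saccade_ms = 40      # assumed duration of one large saccade
settle_ms = 150      # assumed re-fixation/settling overhead per jump
jumps_per_case = 50  # assumed number of current-vs-prior comparisons

overhead_s = jumps_per_case * (saccade_ms + settle_ms) / 1000.0
print(f"~{overhead_s:.1f} s of eye-movement overhead per case")  # ~9.5 s
```

On a case that takes a minute or two to read, savings of this size are in the same ballpark as the ~15% time improvement reported above.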

3. The Useful Field of View (UFOV)

Scanpaths raise an interesting problem. It is self-evident that observers move their eyes around an extended scene in order to see it better. But what exactly does that mean? It is clear that, as you fixate on this word, right now, you can see the whole screen or page in front of you, but if your task is to read letters, your useful field of view (UFOV) is much smaller. You can only actually perform that task in a narrow area around the current point of fixation. Some constraints on the UFOV are based on fundamental visual properties. Turning to Figure 3, if you fixate on the “1”, you can read the letter “A”, but if you fixate on the “2”, the decline in acuity away from the point of fixation will probably mean that you cannot read the letter “P”. If you fixate on the “3”, you will find that it is hard to resolve the letter “C” in the middle of the string “DCT”, even though it is the same size as the clearly visible “A” in Line 1 above. In Line 3, the problem is crowding [38,39], a reduction in our ability to process or recognize one object if it is surrounded by others.
Beyond these basic visual limits, there is also an important attentional limit on the UFOV (e.g., [40]). This is illustrated in Figure 4.
If you fixate on the ‘x’ at the center of this clock face, you should note that you can read each number in turn. If, however, you need to determine which number is out of sequence, you may find that a single fixation of a quarter to a third of a second is not adequate to do the task. If you were lucky, the ‘2’ in the 7 o’clock position was inside your UFOV for this task at that moment. If not, and you had only one fixation, you would have missed it.
The same constraints that shape the UFOV in tasks like those illustrated in Figure 3 and Figure 4, will also apply in medical image search. When radiologists search a breast or a lung for a possibly cancerous mass, they will have a UFOV specific to the current image and task (e.g., the UFOV will be larger if the current target is larger). That clinical UFOV, like any other, will be affected by acuity, crowding and attentional factors. Thus, having an estimate of the UFOV can help us to understand whether the radiologist did or did not “look” at the whole image.
Sanders [41] was an early pioneer in this area, investigating what we are calling the UFOV, though he preferred the term “functional visual field” (FVF) [42,43]. Sanders divided the visual field into three attentional areas: the stationary field, in which people can process information without moving their eyes; the eye field, in which eye movements, but not head movements, are required to sample the information; and the head field, in which head movements would be necessary. He found that, in a target detection task, observers barely made any eye movements when the target was presented within 30 degrees. What we have learned over the subsequent 50 years of attention research is that measuring the UFOV for almost any task involves more factors than those most extensively discussed by Sanders [41]. The size of the UFOV will interact with the type of images, the properties of the target and its surroundings, and with human visual and attentional capacities. In a study of search for low contrast lung nodules, Kundel and colleagues [44] reported that “The visual field size that is most effective in detecting nodules during search has a radius of 3.5 degrees visual angle. Nodule detection may be limited by basic neurologic constraints on human scanning performance”. In a separate study, Carmody et al. [45] asked observers to look for a nodule in chest x-ray films. The images were presented for only 300 msec to simulate a single fixation. They found that detection rates dropped by one-half when the nodules were presented at 5 degrees from fixation. Apparently, it is hard to establish a reliable UFOV measure, even if the task is restricted to something as straightforward as search for nodules in chest x-rays. Thus, any estimate, based on a scanpath, of how much of an image was examined should be treated with caution because it rests on assumptions about the size of the UFOV. That said, statements about relative coverage are more convincing. Unless there is some reason that the UFOV might change between conditions, it is reasonable to use scanpath data to say that observers looked at more of a scene under condition A than under condition B.
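Given some assumed UFOV radius, a coverage estimate of the kind cautioned about here is simple to compute. The sketch below is our own illustration; converting the ~3.5 degree radius from Kundel et al. [44] into pixels depends on display size and viewing distance and is not done here.

```python
import numpy as np

def coverage(fixations, image_shape, ufov_radius_px):
    """Fraction of pixels within ufov_radius_px of at least one fixation.

    The answer is only as good as ufov_radius_px: a different assumed
    UFOV gives a different 'percentage of the image examined'.
    """
    h, w = image_shape
    yy, xx = np.mgrid[0:h, 0:w]
    covered = np.zeros((h, w), dtype=bool)
    for x, y in fixations:
        covered |= (xx - x) ** 2 + (yy - y) ** 2 <= ufov_radius_px ** 2
    return covered.mean()

# Relative statements are safer: coverage(A, ...) > coverage(B, ...)
# is meaningful even when the absolute radius is uncertain.
```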
UFOV questions certainly do not become simpler when the stimuli are 3D volumetric images, such as lung CT. In one of the very few studies, Ebner et al. [46] reported that the nodule detection rate on chest CT was highly correlated not only with the size of the UFOV, as measured by nodule eccentricity (“transverse distance” in their paper), but also with nodule size and local lung complexity. There are essentially no data on the UFOV in DBT, to say nothing of other tasks such as CT angiography, where complex 3D image data change over time. If it were important to know what percentage of an image set was examined in some specific task, UFOV measures would need to be made for that task. It is probably more useful to ask slightly different questions based on the scanpath. For example, when should we be concerned that a reader is not looking at enough of an image?
Thus, with the cautions described above, it is possible, given some UFOV assumptions, to use scanpaths to make statements about how much of an image or a case has been examined. However, it is by no means clear how much coverage is enough. The unthinking answer is that we would hope that the radiologist would look at the “whole image”, but that is clearly incorrect. Suppose you walk into the kitchen to look for a pepper grinder that is, in fact, not present. How much of the kitchen should you look at before declaring the object to be absent (akin to declaring a breast to be normal)? Looking at “everything” would be foolish. You, as a kitchen expert, know that the pepper grinder is never on the floor, even if it could be. On the very rare occasion that it is on the floor, you might fail to find it but, most of the time, not looking at “everything” is sensible. Similarly, an expert radiologist will know that there are some parts of an image that do not require fixation. Indeed, one of the oculomotor hallmarks of developing expertise is a tendency to look at less of the image [47,48]. We can see the utility of learning what not to look at in a study by Rubin et al. [49], who found that radiologists, on average, searched only 26% of the lung parenchyma yet encompassed 75% of the nodules in their search volume. This applies beyond radiology; for example, in dermatology [50]. Other oculomotor metrics also change as expertise develops [51], but the pruning back of the scanpath is an important and general sign of expertise.
Returning to the idea of using eye movement feedback, discussed above, if the scanpath is to be used to warn the expert that some areas of the image are unexamined, those warnings should acknowledge that some areas simply do not need to be examined. It would be foolish to build a system that insisted that a reader fixate every part of an image before allowing the reader to move on to the next case. It might, however, be clever to build an artificial intelligence (AI) system that could learn where in the image/scene a target might be and where it could never be.

4. Scanpath Signatures of Errors in Medical Image Perception

The motivation for eye movement feedback of the sort described above would be to prevent false negative (miss) errors in search. That implies that readers miss findings when they do not look at them. Not looking at a target is certainly one reason that the target might be missed, but it is not the only reason. In medical image perception, Kundel and colleagues developed a useful taxonomy of false negative errors based on eye movement recording. For example, Kundel, Nodine, and Carmody [52] recorded and analyzed eye movements from clinicians who were searching chest x-rays for lung nodules. They argued that clinicians’ eye movements could be used to distinguish three types of errors: search, recognition, and decision errors. In various works, the terminology has varied somewhat, but the core idea has remained about the same. In Figure 5, we use those terms to describe the taxonomy. The scanpaths in Figure 5 are invented for purposes of illustration. Suppose that the yellow arrow is pointing to a finding in the breast that should be reported as suspicious. Kundel and colleagues argued that there were three different ways that this target might be missed. In a search error, the target is never fixated. A recognition error is said to occur when the eyes fixate the target briefly and then move on, with no indication that the reader noted anything of interest. Finally, multiple and/or long fixations on a target indicate a decision error if the reader still fails to report the finding. This pattern indicates that, implicitly or explicitly, the reader knew that this spot deserved attention but then made the wrong decision and did not mark the spot as abnormal. In their lung nodule study, Kundel, Nodine, and Carmody found that clinicians made about 30% search errors, 25% recognition errors, and 45% decision errors. Similar proportions have been found in a variety of studies (e.g., [47,53,54]).
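In analysis code, the taxonomy reduces to thresholds on target-related dwell time. The sketch below is a minimal Python rendering of that logic; the 1000 msec dwell cutoff and the proximity radius are analysis choices that vary across studies, not fixed constants from Kundel et al. [52].

```python
def classify_miss(fixations, target_xy, radius_px, dwell_cutoff_ms=1000):
    """Assign a missed target to the Kundel-style error categories.

    fixations: iterable of (x, y, duration_ms) for one case.
    radius_px: how close a fixation must land to count as 'on target'
               (a UFOV-like assumption).
    """
    tx, ty = target_xy
    dwell = sum(dur for x, y, dur in fixations
                if (x - tx) ** 2 + (y - ty) ** 2 <= radius_px ** 2)
    if dwell == 0:
        return "search error"       # never fixated
    if dwell < dwell_cutoff_ms:
        return "recognition error"  # fixated briefly, then moved on
    return "decision error"         # long dwell, still not reported
```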
How does this taxonomy of errors apply to 3D volumes of image data? The short answer is that the relevant studies have not been done, but there are some hints. A major reason for moving to a 3D modality (e.g., lung CT) over a 2D one (chest x-ray) is that findings that are ambiguous in 2D are clarified in 3D. A downside of the move is that the increase in images leads to an increase in the time per case and to pressure to move quickly through the images. One might suspect that these factors would decrease the proportion of decision errors while increasing the proportion of search errors in which the target was never fixated. Drew et al. [34] found signs of such a shift, and the report by Rubin et al. [49] that readers search only 26% of the lung in lung CT also points in that direction. The topic deserves further study. A more complete review of the work on specific types of search errors can be found in [12].

5. Incidental Findings and Satisfaction of Search Errors

Two possibly related varieties of errors deserve further mention. “Incidental findings” are findings that may be clinically significant but are not the primary target of the clinician’s search. A lung nodule would be a primary target in a search for signs of lung cancer. The same lung nodule would be an incidental finding if it were noticed in the course of an exam to determine whether the patient had pneumonia [55]. Radiologists are typically expected to report incidental findings [56]. There is considerable debate about the reporting and management of incidental findings because many of them turn out not to require any action. Raising them to attention can cause unnecessary worry and unnecessary medical care [57,58,59,60]. On the other hand, not reporting a finding that turns out to be clinically significant is a potent source of malpractice suits [61]. It is important to note that radiologists are trained to detect incidental findings. They do not typically stumble on a finding by chance. They know what they are looking for and, indeed, may have specific search strategies designed to detect problems that are not the specific focus of the case.
Medical image perception researchers do not need to solve the issues around the management of incidental findings. We can focus on reducing the number of targets that are not found at all. Clinicians cannot successfully manage what they do not see. What kind of errors are missed incidental findings? In the eye tracking study that produced the driller/scanner data shown in Figure 2, an image of a gorilla was inserted into the final case. It spanned five slices in the stack of CT images and was easily detectable in a variety of control conditions. Nevertheless, 20 of 24 radiologists failed to notice it [62]. The gorilla was chosen as a stimulus because it is the iconic stimulus [63] for studying the phenomenon of “inattentional blindness” [64]. The experiment showed that expertise does not immunize the expert against inattentional blindness, even when the stimuli are the subject of that expertise. When radiologists are looking for a small, round, white nodule, they are likely to miss a big, visible gorilla right in front of their eyes because attention will be guided to the wrong set of basic color and shape features for gorilla detection [65].
Since the observers were being eye tracked, it was possible to apply the Kundel taxonomy to these miss errors (understanding that there are only 20 data points in total here, from the 20 miss trials). Observers spent nearly 6 s, on average, looking at the five slices that contained the gorilla, and they fixated on the gorilla itself for an average of 329 msec. Thus, most of these do not appear to be search errors. They appear to be recognition errors, in which the eyes landed on the gorilla but the strange identity of the item did not register with the observer. It seems quite unlikely that these could be decision errors. Of course, this does not mean that all misses, or even most incidental finding errors, are recognition errors. However, the result does make the point that an expert can look at something very odd in an image and, if looking for something else, fail to note the oddity.
As multiple radiologists have pointed out, gorillas are not an ideal model for incidental findings. When a radiologist misses a nodule while looking for pneumonia, she knows that lung cancer is something that can plausibly happen in a lung. Gorillas do not happen in lungs. In more recent work, we have developed a lab analog of incidental findings in which non-expert observers reliably miss over 30% of targets that they know they are looking for [66]. Observers look for three specific images and, at the same time, for any members of any of three broad categories, like animals, hats, or fruit. They find the specific targets with ease but make large numbers of errors on the categorical targets. We do not yet know whether observers fixate the targets that they fail to detect because the eye tracking experiments have not been done. The results of the gorilla study imply that we will find that observers can fixate on an elephant and still fail to report having found an animal.
Satisfaction of Search (SoS) errors are a class of false negative errors that are somewhat similar to incidental finding errors in the sense that, in both cases, one target, either the one you are searching for or the one that has been found, interferes with the detection of another. In the SoS case, finding one target in an image makes it less likely that observers will report a second target compared to cases where only the second target is present [67]. The name comes from the original account of the source of these errors. It was initially proposed that, having found one target, observers were “satisfied” and thus abandoned the search too soon, before finding the second target. Subsequent research suggests that this theory is not correct [68,69,70], but the name has persisted, though Cain et al. [71] have tried to get the field to shift to “subsequent search misses” (SSM).
Two groups have conducted the most extensive work on SoS: Berbaum and his colleagues (reviewed in [72,73]) and Cain, Adamo, Mitroff, and colleagues [71]. Here, we want to focus on what eye movements can tell us about SoS errors. Kundel, Nodine, and their co-workers concluded that most SoS errors were recognition errors. The observers fixated the missed items, but only relatively briefly. If they spent a longer time, they tended to find the target [70]. Berbaum et al. [74] found that the type of task made a difference in x-ray studies of the abdomen. For some classes of radiologic exam, they found that SoS errors tended to be search errors, in which the reader did not look in the right place. For other tasks, like Samuel et al. [70], they found a large proportion of recognition errors. On the other hand, in an eye tracking study of chest x-rays, Berbaum et al. [75] found very few search errors, 35% recognition errors, and 58% decision errors. In that study, the readers apparently looked at the targets for some time and decided not to call them targets. Using a very different task, search for low contrast T’s among L’s with a non-expert population of observers, Cain et al. [71] found search errors to be the largest category (37.8%), with the second largest category, at 24.3%, being a new type of error that they called “resource depletion errors”. They define resource depletion errors as errors that arise when the first target depletes working memory resources that could be used to find the second target. Recognition and decision errors together accounted for only 20% of the errors in their study. Clearly, SoS (or SSM) errors are not the product of a unitary mechanism in search. As shown in [71], there is a taxonomy of these errors, as there is a taxonomy of errors in search more generally.

6. Scene Gist and Medical Scene Gist

There are other emerging uses of eye tracking in medical image perception. For example, Drew and colleagues have been using eye tracking to address the effects of interruption on radiologists [76,77]. However, we want to finish this brief review with some consideration of medical image searches that involve little or no oculomotor activity. To quote Kundel [78], “Clearly much of what happens in perception precedes exhaustive visual scanning of the image.” An important part of the Kundel-Nodine model of search in medical images is a “holistic” stage lasting about a second [79,80,81] during which the observer might be processing the whole image without needing to move the eyes. Much of the basis for proposing this holistic stage comes from eye tracking studies that often show the eyes of experts moving to the target almost immediately [82,83].
Basic research in visual attention would divide these holistic effects into more than one component. First, “covert attention” can be shifted more rapidly and more frequently than the eyes [84,85,86], so attention might have reached a target before the eyes had a chance to move. The first eye movement might simply be confirming what had already been found. Second, a set of basic features guides the deployment of attention and the eyes [87,88]. Thus, in a search for a lung nodule, attention will be guided to locations that contain the small, white, and round features characteristic of a nodule. Again, the first eye movement might go to a target because the target’s features provided successful guidance about where best to deploy the eyes.
Kundel and Nodine [79] investigated whether there is any useful information available before the eyes move by asking radiologists to evaluate x-rays with only a 200 msec exposure to the images. With these particular stimuli, expert performance was nearly perfect with unlimited viewing. Surprisingly, with just a 200 msec glimpse of the image, their classification accuracy was still about 70%. In a subsequent study, Carmody, Nodine, and Kundel [89] systematically varied the exposure duration to test how detection performance changes over the first half-second of exposure. They found that, across different levels of image visibility, performance reached an asymptote after 240 msec. Moreover, even for the least visible cases, there was still substantial information available in the first quarter of a second. This global analysis occurs not only in lung screening but also in mammography screening [90]. The Kundel-Nodine group argues that a major part of the development of expertise is a growing ability to do this holistic processing. They invoke this holistic stage to explain why experts make fewer eye movements.
Interestingly, there is an aspect of this holistic stage of processing that does not serve to direct the eyes to the target. Evans et al. [91] showed radiologists a brief flash of a mammogram, for durations from 250 msec to 2 s, and asked them to classify the case as normal or abnormal (would you call back this patient?). They found that radiologists could perform at above-chance levels at all stimulus durations, even those that did not permit a voluntary eye movement. Importantly, this awareness of abnormality did not appear to be based on visible features of a lesion: when radiologists were also asked to place a localizing mark on the most suspicious location on a white outline mask of the breast after the brief presentation of the stimulus, their performance was no better than chance. This was true regardless of their rated degree of confidence that the presented image was abnormal. Readers were not simply getting lucky and fixating a lesion on a subset of trials. In a subsequent study, Evans et al. [92] showed that this global “gist signal” was not based on a break in the normal asymmetry between breasts, nor was it a proxy for breast density, a known risk factor for breast cancer. In fact, radiologists were able to discriminate between normal and abnormal at above-chance levels even when the “abnormal” images came from the breast contralateral to the breast with overt signs of cancer. Since no lesion was present in the image, the observed performance cannot be due to a lucky fixation on a mass. Something about the texture of the breast tissue is abnormal, and experts have become sensitive to that signal. Brennan et al. [93] repeated the experiment with the “priors”, mammograms acquired three years before the women developed overt signs of cancer. Even though there were no localized lesions in these priors, radiologists could still detect the gist signal at an above-chance level. A signal that is available years before the cancer develops could be a useful imaging biomarker of cancer risk.
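“Above chance” in these studies is typically quantified with signal detection measures. As a reminder of the arithmetic, here is a minimal d′ calculation in Python; the example rates are invented for illustration, not data from Evans et al. [91,92] or Brennan et al. [93].

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity for normal-vs-abnormal judgments; d' = 0 is chance.
    Rates of exactly 0 or 1 should be corrected before calling this."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# An invented reader who calls back 70% of abnormals and 40% of normals:
print(round(d_prime(0.70, 0.40), 2))  # 0.78, modestly but reliably above 0
```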

7. Conclusions

In this brief review, we have tried to show the usefulness of eye tracking for understanding how experts perform tasks like those involved in clinical radiology. A similar story could be told about many other expert domains. Scanpaths tell a story. The story may not be as clear as optimistic interpretations of Yarbus’ work might suggest, but the sequence of eye movements and the placement of fixations relative to targets tell us a great deal about the processes of expert visual search. Eye movement recordings are of particular interest in analyzing errors and, we may hope, in testing efforts to reduce those errors. It is notable how many of the basic issues in this field were outlined and studied by Kundel, Nodine, and their group (as well as by other labs and earlier researchers). However, their path-breaking work is not the end of the story. As long as advances in medical imaging keep creating new stimuli for medical interpretation, there will always be new scientific questions to investigate.

Funding

This research was funded by NIH grants NCI CA207490 and NEI EY017001.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boyer, B.; Hauret, L.; Bellaiche, R.; Graf, C.; Bourcier, B.; Fichet, G. Retrospectively detectable carcinomas: Review of the literature. J. Radiol. 2004, 85, 2071–2078. [Google Scholar] [CrossRef]
  2. Hoff, S.R.; Abrahamsen, A.-L.; Samset, J.H.; Vigeland, E.; Klepp, O.R.; Hofvind, S. Breast Cancer: Missed Interval and Screening-detected Cancer at Full-Field Digital Mammography and Screen-Film Mammography–Results from a Retrospective Review. Radiology 2012, 264, 378–386. [Google Scholar] [CrossRef] [PubMed]
  3. Martin, J.E.; Moskowitz, M.; Milbrath, J.R. Breast cancer missed by mammography. AJR Am. J. Roentgenol. 1979, 132, 737–739. [Google Scholar] [CrossRef] [PubMed]
  4. Pisano, E.D.; Gatsonis, C.; Hendrick, E.; Yaffe, M.; Baum, J.K.; Acharyya, S.; Conant, E.F.; Fajardo, L.L.; Bassett, L.; D’orsi, C.; et al. Diagnostic Performance of Digital versus Film Mammography for Breast-Cancer Screening. N. Engl. J. Med. 2005, 353, 1773–1783. [Google Scholar] [CrossRef] [PubMed]
  5. Le, M.T.; Mothersill, C.E.; Seymour, C.B.; Mcneill, F.E. Is the false-positive rate in mammography in North America too high? Br. J. Radiol. 2016, 89. [Google Scholar] [CrossRef] [PubMed]
  6. Seely, J.M.; Alhassan, T. Screening for breast cancer in 2018—What should we be doing today? Curr. Oncol. 2018. [Google Scholar] [CrossRef] [PubMed]
  7. Hedlund, L.W.; Anderson, R.F.; Goulding, P.L.; Beck, J.W.; Effmann, E.L.; Putman, C.E. Two methods for isolating the lung area for a CT scan for density information. Radiology 1982, 144, 353–357. [Google Scholar] [CrossRef] [PubMed]
  8. Kotsianos-Hermle, D.; Wirth, S.; Fischer, T.; Hiltawsky, K.M.; Reiser, M. First clinical use of a standardized three-dimensional ultrasound for breast imaging. Eur. J. Radiol. 2009, 71, 102–108. [Google Scholar] [CrossRef] [PubMed]
  9. Celebi, M.E.; Schaefer, G. Color Medical Image Analysis; Emre Celebi, M., Schaefer, G., Eds.; Springer: Dordrecht, The Netherlands, 2013. [Google Scholar]
  10. Moscariello, A.; Takx, R.A.; Schoepf, U.J.; Renker, M.; Zwerner, P.L.; O’Brien, T.X.; Allmendinger, T.; Vogt, S.; Schmidt, B.; Savino, G.; et al. Coronary CT angiography: Image quality, diagnostic accuracy, and potential for radiation dose reduction using a novel iterative image reconstruction technique–comparison with traditional filtered back projection. Eur. Radiol. 2011, 21, 2130–2138. [Google Scholar] [CrossRef]
  11. Eddleman, C.S.; Jeong, H.J.; Hurley, M.C.; Zuehlsdorff, S.; Dabus, G.; Getch, C.G.; Batjer, H.H.; Bendok, B.R.; Carroll, T.J. 4D radial acquisition contrast-enhanced MR angiography and intracranial arteriovenous malformations: Quickly approaching digital subtraction angiography. Stroke 2009, 40, 2749–2753. [Google Scholar] [CrossRef] [PubMed]
  12. Brunye, T.T.; Drew, T.; Weaver, D.L.; Elmore, J.G. A Review of Eye Tracking for Understanding and Improving Diagnostic Interpretation. Cogn. Res. Princ. Implic. (CRPI) 2019, 4, 7. [Google Scholar] [CrossRef] [PubMed]
  13. Van der Gijp, A.; Ravesloot, C.J.; Jarodzka, H.; van der Schaaf, M.F.; van der Schaaf, I.C.; van Schaik, J.P.; Ten Cate, T.J. How visual search relates to visual diagnostic performance: A narrative systematic review of eye-tracking research in radiology. Adv. Health Sci. Educ. Theory Pract. 2017, 22, 765–787. [Google Scholar] [CrossRef] [PubMed]
  14. Krupinski, E.A. Current Perspectives in Medical Image Perception. Atten. Percept. Psychophys. 2010, 72, 1205–1217. [Google Scholar] [CrossRef] [PubMed]
  15. Noton, D.; Stark, L. Scanpaths in eye movements during pattern perception. Science 1971, 171, 308–311. [Google Scholar] [CrossRef]
  16. Noton, D.; Stark, L. Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vis. Res. 1971, 11, 929–942. [Google Scholar] [CrossRef]
  17. Yarbus, A.L. Eye Movements and Vision; Plenum: New York, NY, USA, 1967. [Google Scholar]
  18. Greene, M.R.; Liu, T.; Wolfe, J.M. Reconsidering Yarbus: Pattern classification cannot predict observer’s task from scan paths. Vis. Res. 2012, 62, 1–8. [Google Scholar] [CrossRef]
  19. Bahle, B.; Mills, M.; Dodd, M.D. Human Classifier: Observers can deduce task solely from eye movements. Atten. Percept. Psychophys. 2017, 79, 1415–1425. [Google Scholar] [CrossRef]
  20. Damiano, C.; Wilder, J.; Walther, D.B. Mid-level feature contributions to category-specific gaze guidance. Atten. Percept. Psychophys. 2019, 81, 35–46. [Google Scholar] [CrossRef]
  21. Kardan, O.; Berman, M.G.; Yourganov, G.; Schmidt, J.; Henderson, J.M. Classifying mental states from eye movements during scene viewing. J. Exp. Psychol. Hum. Percept. Perform. 2015, 41, 1502–1514. [Google Scholar] [CrossRef]
  22. Võ, M.L.H.; Aizenman, A.M.; Wolfe, J.M. You think you know where you looked? You better look again. J. Exp. Psychol. Hum. Percept. Perform. 2016, 42, 1477–1481. [Google Scholar] [CrossRef]
  23. Kok, E.M.; Aizenman, A.M.; Võ, M.L.H.; Wolfe, J.M. Even if I showed you where you looked, remembering where you just looked is hard. J. Vis. 2017, 17, 2. [Google Scholar] [CrossRef] [PubMed]
  24. Kundel, H.L.; Nodine, C.F.; Krupinski, E.A. Computer-displayed eye position as a visual aid to pulmonary nodule interpretation. Investig. Radiol. 1990, 25, 890–896. [Google Scholar] [CrossRef]
  25. Donovan, T.; Manning, D.J.; Crawford, T. Performance changes in lung nodule detection following perceptual feedback of eye movements. Proc. SPIE 2008, 6917. [Google Scholar] [CrossRef]
  26. Drew, T.; Williams, L.H. Simple eye-movement feedback during visual search is not helpful. Cogn. Res. Princ. Implic. 2017, 2, 44. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Peltier, C.; Becker, M.W. Eye movement feedback fails to improve visual search performance. Cogn. Res. Princ. Implic. 2017, 2, 47. [Google Scholar] [CrossRef] [Green Version]
  28. Aizenman, A.; Drew, T.; Ehinger, K.A.; Georgian-Smith, D.; Wolfe, J.M. Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: An eye tracking study. J. Med. Imaging 2017, 4, 045501. [Google Scholar] [CrossRef]
  29. Den Boer, L.; van der Schaaf, M.F.; Vincken, K.L.; Mol, C.P.; Stuijfzand, B.G.; van der Gijp, A. Volumetric Image Interpretation in Radiology: Scroll Behavior and Cognitive Processes. Adv. Health Sci. Educ. 2018, 23, 783–802. [Google Scholar] [CrossRef]
  30. D’Ardenne, N.M.; Nishikawa, R.M.; Wu, C.C.; Wolfe, J.M. Oculomotor Behavior of Radiologists Reading Digital Breast Tomosynthesis (DBT). In Proceedings of the SPIE Medical Imaging, San Diego, CA, USA, 16–21 February 2019. [Google Scholar]
  31. Mercan, E.; Shapiro, L.G.; Brunyé, T.T.; Weaver, D.L.; Elmore, J.G. Characterizing Diagnostic Search Patterns in Digital Breast Pathology: Scanners and Drillers. J. Digit. Imaging 2018, 31, 32–41. [Google Scholar] [CrossRef]
  32. Timberg, P.; Lång, K.; Nyström, M.; Holmqvist, K.; Wagner, P.; Förnvik, D.; Tingberg, A.; Zackrisson, S. Investigation of viewing procedures for interpretation of breast tomosynthesis image volumes: A detection-task study with eye tracking. Eur. Radiol. 2013, 23, 997–1005. [Google Scholar] [CrossRef]
  33. Venjakob, A.C.; Mello-Thoms, C.R. Review of prospects and challenges of eye tracking in volumetric imaging. J. Med. Imaging 2015, 3. [Google Scholar] [CrossRef]
  34. Drew, T.; Vo, M.L.-H.; Olwal, A.; Jacobson, F.; Seltzer, S.E.; Wolfe, J.M. Scanners and drillers: Characterizing expert visual search through volumetric images. J. Vis. 2013, 13. [Google Scholar] [CrossRef] [PubMed]
  35. Seltzer, S.E.; Judy, P.F.; Adams, D.F.; Jacobson, F.L.; Stark, P.; Kikinis, R.; Swensson, R.G.; Hooton, S.; Head, B.; Feldman, U. Spiral CT of the chest: Comparison of cine and film-based viewing. Radiology 1995, 197, 73–78. [Google Scholar] [CrossRef] [PubMed]
  36. Baker, J.A.; Lo, J.Y. Breast Tomosynthesis: State-of-the-Art and Review of the Literature. Acad. Radiol. 2011, 18, 1298–1310. [Google Scholar] [CrossRef] [PubMed]
  37. Drew, T.; Aizenman, A.M.; Thompson, M.B.; Kovacs, M.D.; Trambert, M.; Reicher, M.A.; Wolfe, J.M. Image toggling saves time in mammography. J. Med. Imaging 2016, 3, 011003. [Google Scholar] [CrossRef] [PubMed]
  38. Levi, D.M. Crowding-An essential bottleneck for object recognition: A mini-review. Vis. Res. 2008, 48, 635–654. [Google Scholar] [CrossRef]
  39. Manassi, M.; Whitney, D. Multi-level Crowding and the Paradox of Object Recognition in Clutter. Curr. Biol. 2018, 28, R127–R133. [Google Scholar] [CrossRef] [Green Version]
  40. Hulleman, J.; Olivers, C.N.L. The impending demise of the item in visual search. Behav. Brain Sci. 2017, 40, e132. [Google Scholar] [CrossRef]
  41. Sanders, A.F. Some aspects of the selective process in the functional visual field. Ergonomics 1970, 13, 101–117. [Google Scholar] [CrossRef]
  42. Ikeda, M.; Takeuchi, T. Influence of foveal load on the functional visual field. Percept. Psychophys. 1975, 18, 255–260. [Google Scholar] [CrossRef]
  43. Sanders, A.F.; Houtmans, M.J.M. Perceptual modes in the functional visual field. Acta Psychol. 1985, 58, 251–261. [Google Scholar] [CrossRef]
  44. Kundel, H.L.; Nodine, C.F.; Thickman, D.; Toto, L. Searching for lung nodules. A comparison of human performance with random and systematic scanning models. Investig. Radiol. 1987, 22, 417–422. [Google Scholar] [CrossRef]
  45. Carmody, D.P.; Nodine, C.F.; Kundel, H.L. An analysis of perceptual and cognitive factors in radiographic interpretation. Perception 1980, 9, 339–344. [Google Scholar] [CrossRef] [PubMed]
  46. Ebner, L.; Tall, M.; Choudhury, K.R.; Ly, D.L.; Roos, J.E.; Napel, S.; Rubin, G.D. Variations in the functional visual field for detection of lung nodules on chest computed tomography: Impact of nodule size, distance, and local lung complexity. Med. Phys. 2017, 44, 3483–3490. [Google Scholar] [CrossRef] [PubMed]
  47. Krupinski, E.A. Visual scanning patterns of radiologists searching mammograms. Acad. Radiol. 1996, 3, 137–144. [Google Scholar] [CrossRef]
  48. Kelly, B.S.; Rainford, L.A.; Darcy, S.P.; Kavanagh, E.C.; Toomey, R.J. The Development of Expertise in Radiology: In Chest Radiograph Interpretation, “Expert” Search Pattern May Predate “Expert” Levels of Diagnostic Accuracy for Pneumothorax Identification. Radiology 2016, 280, 252–260. [Google Scholar] [CrossRef] [PubMed]
  49. Rubin, G.D.; Roos, J.E.; Tall, M.; Harrawood, B.; Bag, S.; Ly, D.L.; Seaman, D.M.; Hurwitz, L.M.; Napel, S.; Roy Choudhury, K. Characterizing Search, Recognition, and Decision in the Detection of Lung Nodules on CT Scans: Elucidation with Eye Tracking. Radiology 2015, 274, 276–286. [Google Scholar] [CrossRef] [PubMed]
  50. Dreiseitl, S.; Pivec, M.; Binder, M. Differences in examination characteristics of pigmented skin lesions: Results of an eye tracking study. Artif. Intell. Med. 2012, 54, 201–205. [Google Scholar] [CrossRef] [PubMed]
  51. Bertram, R.; Kaakinen, J.; Bensch, F.; Helle, L.; Lantto, E.; Niemi, P.; Lundbom, N. Eye Movements of Radiologists Reflect Expertise in CT Study Interpretation: A Potential Tool to Measure Resident Development. Radiology 2016, 281, 805–815. [Google Scholar] [CrossRef]
  52. Kundel, H.L.; Nodine, C.F.; Carmody, D. Visual scanning, pattern recognition and decision-making in pulmonary nodule detection. Investig. Radiol. 1978, 13, 175–181. [Google Scholar] [CrossRef]
  53. Hu, C.H.; Kundel, H.L.; Nodine, C.F.; Krupinski, E.A.; Toto, L.C. Searching for bone fractures: A comparison with pulmonary nodule search. Acad. Radiol. 1994, 1, 25–32. [Google Scholar] [CrossRef]
  54. Kundel, H.L.; Nodine, C.F.; Krupinski, E.A. Searching for lung nodules. Visual dwell indicates locations of false-positive and false-negative decisions. Investig. Radiol. 1989, 24, 472–478. [Google Scholar] [CrossRef]
  55. Beigelman-Aubry, C.; Hill, C.; Grenier, P.A. Management of an incidentally discovered pulmonary nodule. Eur. Radiol. 2007, 17, 449–466. [Google Scholar] [CrossRef] [PubMed]
  56. Lumbreras, B.; Donat, L.; Hernández-Aguado, I. Incidental findings in imaging diagnostic tests: A systematic review. Br. J. Radiol. 2010, 83, 276–289. [Google Scholar] [CrossRef] [PubMed]
  57. Heller, R.E. Counterpoint: A Missed Lung Nodule Is a Significant Miss. J. Am. Coll. Radiol. 2017, 14, 1552–1553. [Google Scholar] [CrossRef] [PubMed]
  58. Oren, O.; Kebebew, E.; Ioannidis, J.P.A. Curbing Unnecessary and Wasted Diagnostic Imaging. JAMA 2019, 321, 245–246. [Google Scholar] [CrossRef]
  59. Pandharipande, P.V.; Herts, B.R.; Gore, R.M.; Mayo-Smith, W.W.; Harvey, H.B.; Megibow, A.J.; Berland, L.L. Authors’ Reply. J. Am. Coll. Radiol. 2016, 13, 1025–1027. [Google Scholar] [CrossRef] [PubMed]
  60. Pandharipande, P.V.; Herts, B.R.; Gore, R.M.; Mayo-Smith, W.W.; Harvey, H.B.; Megibow, A.J.; Berland, L.L. Rethinking Normal: Benefits and Risks of Not Reporting Harmless Incidental Findings. J. Am. Coll. Radiol. 2016, 13, 764–767. [Google Scholar] [CrossRef] [PubMed]
  61. Clayton, E.W.; Haga, S.; Kuszler, P.; Bane, E.; Shutske, K.; Burke, W. Managing incidental genomic findings: Legal obligations of clinicians. Genet. Med. 2013, 15, 624–629. [Google Scholar] [CrossRef]
  62. Drew, T.; Vo, M.L.-H.; Wolfe, J.M. The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers. Psychol. Sci. 2013, 24, 1848–1853. [Google Scholar] [CrossRef] [PubMed]
  63. Simons, D.J.; Chabris, C.F. Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception 1999, 28, 1059–1074. [Google Scholar] [CrossRef] [PubMed]
  64. Mack, A.; Rock, I. Inattentional Blindness; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  65. Most, S.B.; Simons, D.J.; Scholl, B.J.; Jimenez, R.; Clifford, E.; Chabris, C.F. How not to be seen: The contribution of similarity and selective ignoring to sustained inattentional blindness. Psychol. Sci. 2001, 12, 9–17. [Google Scholar] [CrossRef] [PubMed]
  66. Wolfe, J.M.; Alaoui-Soce, A.; Schill, H. How did I miss that? Developing mixed hybrid visual search as a ‘model system’ for incidental finding errors in radiology. Cogn. Res. Princ. Implic. (CRPI) 2017, 2, 35. [Google Scholar] [CrossRef] [PubMed]
  67. Tuddenham, W.J. Visual search, image organization, and reader error in roentgen diagnosis. Studies of the psycho-physiology of roentgen image perception. Radiology 1962, 78, 694–704. [Google Scholar] [CrossRef] [PubMed]
  68. Berbaum, K.S.; Franken, E.A., Jr.; Dorfman, D.D.; Rooholamini, S.A.; Coffman, C.E.; Cornell, S.H.; Cragg, A.H.; Galvin, J.R.; Honda, H.; Kao, S.C. Time course of satisfaction of search. Investig. Radiol. 1991, 26, 640–648. [Google Scholar] [CrossRef]
  69. Berbaum, K.S.; Franken, E.A., Jr.; Dorfman, D.D.; Rooholamini, S.A.; Kathol, M.H.; Barloon, T.J.; Behlke, F.M.; Sato, Y.U.T.A.K.A.; Lu, C.H.; El-Khoury, G.Y.; et al. Satisfaction of search in diagnostic radiology. Investig. Radiol. 1990, 25, 133–140. [Google Scholar] [CrossRef]
  70. Samuel, S.; Kundel, H.L.; Nodine, C.F.; Toto, L.C. Mechanism of satisfaction of search: Eye position recordings in the reading of chest radiographs. Radiology 1995, 194, 895–902. [Google Scholar] [CrossRef] [PubMed]
  71. Cain, M.S.; Adamo, S.H.; Mitroff, S.R. A taxonomy of errors in multiple-target visual search. Vis. Cogn. 2013, 21, 899–921. [Google Scholar] [CrossRef]
  72. Berbaum, K.S.; Franken, E.A.; Caldwell, R.T.; Shartz, K. Satisfaction of search in traditional radiographic imaging. In The Handbook of Medical Image Perception and Techniques; Krupinski, E.A., Samei, E., Eds.; Cambridge U Press: Cambridge, UK, 2010; pp. 107–138. [Google Scholar]
  73. Berbaum, K.S.; Franken, E.A.; Caldwell, R.T.; Shartz, K.; Madsen, M. Satisfaction of search in radiology. In The Handbook of Medical Image Perception and Techniques, 2nd ed.; Samei, E., Krupinski, E.A., Eds.; Cambridge U Press: Cambridge, UK, 2019; pp. 121–166. [Google Scholar]
  74. Berbaum, K.S.; Franken, E.A., Jr.; Dorfman, D.D.; Miller, E.M.; Krupinski, E.A.; Kreinbring, K.; Caldwell, R.T.; Lu, C.H. Cause of satisfaction of search effects in contrast studies of the abdomen. Acad. Radiol. 1996, 3, 815–826. [Google Scholar] [CrossRef]
  75. Berbaum, K.S.; Franken, E.A., Jr.; Dorfman, D.D.; Miller, E.M.; Caldwell, R.T.; Kuehn, D.M.; Berbaum, M.L. Role of faulty visual search in the satisfaction of search effect in chest radiography. Acad. Radiol. 1998, 5, 9–19. [Google Scholar] [CrossRef]
  76. Drew, T.; Williams, L.H.; Aldred, B.; Heilbrun, M.E.; Minoshima, S. Quantifying the costs of interruption during diagnostic radiology interpretation using mobile eye-tracking glasses. J. Med. Imaging 2018, 5, 031406. [Google Scholar] [CrossRef] [Green Version]
  77. Williams, L.H.; Drew, T. Distraction in diagnostic radiology: How is search through volumetric medical images affected by interruptions? Cogn. Res. Princ. Implic. 2017, 2, 12. [Google Scholar] [CrossRef] [PubMed]
  78. Kundel, H.L. How to minimize perceptual error and maximize expertise in medical imaging. Proc. SPIE 2007, 6515. [Google Scholar] [CrossRef]
  79. Kundel, H.L.; Nodine, C.F. Interpreting chest radiographs without visual search. Radiology 1975, 116, 527–532. [Google Scholar] [CrossRef] [PubMed]
  80. Kundel, H.L.; Nodine, C.F.; Conant, E.F.; Weinstein, S.P. Holistic component of image perception in mammogram interpretation: Gaze-tracking study. Radiology 2007, 242, 396–402. [Google Scholar] [CrossRef] [PubMed]
  81. Kundel, H.L.; Nodine, C.F.; Krupinski, E.A.; Mello-Thoms, C. Using gaze-tracking data and mixture distribution analysis to support a holistic model for the detection of cancers on mammograms. Acad. Radiol. 2008, 15, 881–886. [Google Scholar] [CrossRef] [PubMed]
  82. Nodine, C.F.; Mello-Thoms, C. The role of expertise in radiologic image interpretation. In The Handbook of Medical Image Perception and Techniques; Krupinski, E.A., Samei, E., Eds.; Cambridge U Press: Cambridge, UK, 2010; pp. 139–156. [Google Scholar]
  83. Nodine, C.F.; Mello-Thoms, C. Acquiring expertise in radiologic image interpretation. In The Handbook of Medical Image Perception and Techniques, 2nd ed.; Samei, E., Krupinski, E.A., Eds.; Cambridge U Press: Cambridge, UK, 2019; pp. 139–156. [Google Scholar]
  84. Posner, M.I. Orienting of attention. Quart. J. Exp. Psychol. 1980, 32, 3–25. [Google Scholar] [CrossRef]
  85. Posner, M.I.; Cohen, Y. Components of attention. In Attention and Performance X; Bouma, H., Bouwhuis, D.G., Eds.; Erlbaum: Hillside, NJ, USA, 1984; pp. 55–66. [Google Scholar]
  86. Taylor, T.L.; Klein, R.M. On the causes and effects of inhibition of return. Psychon. Bull. Rev. 1999, 5, 625–643. [Google Scholar] [CrossRef]
  87. Wolfe, J.M.; Horowitz, T.S. What attributes guide the deployment of visual attention and how do they do it? Nat. Rev. Neurosci. 2004, 5, 495–501. [Google Scholar] [CrossRef]
  88. Wolfe, J.M.; Horowitz, T.S. Five factors that guide attention in visual search. Nat. Hum. Behav. 2017, 1, 0058. [Google Scholar] [CrossRef]
  89. Carmody, D.P.; Nodine, C.F.; Kundel, H.L. Finding lung nodules with and without comparative visual scanning. Percept. Psychophys. 1981, 29, 594–598. [Google Scholar] [CrossRef] [Green Version]
  90. Nodine, C.F.; Mello-Thoms, C.; Kundel, H.L.; Weinstein, S.P. Time course of perception and decision making during mammographic interpretation. AJR Am. J. Roentgenol. 2002, 179, 917–923. [Google Scholar] [CrossRef] [PubMed]
  91. Evans, K.K.; Birdwell, R.L.; Wolfe, J.M. If You Don’t Find It Often, You Often Don’t Find It: Why Some Cancers Are Missed in Breast Cancer Screening. PLoS ONE 2013, 8, e64366. [Google Scholar] [CrossRef] [PubMed]
  92. Evans, K.; Haygood, T.M.; Cooper, J.; Culpan, A.-M.; Wolfe, J.M. A half-second glimpse often lets radiologists identify breast cancer cases even when viewing the mammogram of the opposite breast. Proc. Natl. Acad. Sci. USA 2016, 113, 10292–10297. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  93. Brennan, P.C.; Gandomkar, Z.; Ekpo, E.U.; Tapia, K.; Trieu, P.D.; Lewis, S.J.; Wolfe, J.M.; Evans, K.K. Radiologists can detect the ‘gist’ of breast cancer before any overt signs of cancer appear. Sci. Rep. 2018, 8, 8717. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Eye tracking in 3D stacks of images means keeping track of the eye’s position in the XY plane and the depth (Z) of the currently displayed image.
Figure 2. To visualize an approximation of the 3D scanpath through the lungs, position in the XY plane is coarsely color-coded into four quadrants (left-hand image). Depth (the slice in the stack) is plotted as a function of time-on-task on the right, with colors indicating the XY position. The terms “driller” and “scanner” are explained in the main text.
Figure 3. Acuity and crowding limit the UFOV.
Figure 4. Fixate on the ‘x’ at the center and find the incorrect number.
Figure 5. Three types of false negative (miss) errors, as proposed by Kundel and colleagues.
