Review

Person Identification from Drones by Humans: Insights from Cognitive Psychology

School of Psychology, University of Kent, Canterbury CT2 7NP, UK
* Author to whom correspondence should be addressed.
Drones 2018, 2(4), 32; https://doi.org/10.3390/drones2040032
Submission received: 14 August 2018 / Revised: 21 September 2018 / Accepted: 26 September 2018 / Published: 28 September 2018

Abstract

The deployment of unmanned aerial vehicles (i.e., drones) in military and police operations implies that drones can provide footage that is of sufficient quality to enable the recognition of strategic targets, criminal suspects, and missing persons. By contrast, evidence from Cognitive Psychology suggests that such identity judgements by humans are difficult even under ideal conditions, and are more challenging still with drone surveillance footage. In this review, we outline the psychological literature on person identification for readers who are interested in the real-world application of drones. We specifically focus on factors that are likely to affect identification performance from drone-recorded footage, such as image quality, and additional person-related information from the body and gait. Based on this work, we suggest that person identification from drones is likely to be very challenging indeed, and that performance in laboratory settings is still very likely to underestimate the difficulty of this task in real-world settings.

1. Background

Over the last century, the study of visual processing in Psychology has been instrumental in developing an understanding of how humans parse visual information [1,2]. This paper provides a short review of key insights from this field into how identity is visually derived from people, focusing on factors that affect person identification from drone footage by human observers. All of the work summarised here is based on human performance data from principled scientific investigations, typically comprising a series of related experiments.
Drones are routinely deployed for police and military operations that rely on the successful identification of people. In the UK, for example, drones are employed by police to track suspects [3], as well as to search for missing persons [4]. In addition, UK and US military forces use drones for the acquisition and elimination of target persons [5]. The deployment of drones for such purposes implies that drone-recorded footage can facilitate person identification. This is difficult to reconcile with reports from personnel who remotely pilot drones, which suggest that the quality of drone footage is actually very poor [6]. These reports converge with accounts of civilian casualties [7], as well as fatalities [8,9], which have been attributed to person misidentification errors based on aerial drone footage.
These real-world errors are corroborated by an extensive literature on person identification in Cognitive Psychology. This research shows that, whilst identity can be derived from someone’s body [10] and gait [11], the face is the most useful cue for identification [12,13]. Studies have demonstrated that the recognition of familiar faces, such as those belonging to a friend, family member, or famous celebrity, tends to be highly reliable [14,15], and can succeed even under limited viewing conditions, such as when images of faces are moderately degraded [13,16]. By contrast, viewers often struggle to distinguish one unfamiliar person from another even under tightly controlled experimental conditions, and frequently fail to recognize that two photographs depict the same unfamiliar person [17,18].
This latter process is conventionally investigated via face-matching tasks [19,20], in which participants view two side-by-side face photographs, and decide whether they depict the same person, or two different individuals [21,22]. Stimuli in these tasks typically consist of high-resolution images of faces that are matched in terms of expression, pose, and lighting (see Figure 1). In addition, pairs of photographs that depict the same person are often taken just minutes apart, to minimise the natural variability that can arise within a person’s appearance over time. Under these conditions, which are designed to maximise identification accuracy, viewers make around 20% errors [21].
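The studies reviewed here typically report accuracy separately for match and mismatch trials. A common companion analysis in this literature combines these two figures into a single sensitivity score, d', by treating “same” responses to identity matches as hits and “same” responses to identity mismatches as false alarms. The short Python sketch below is purely illustrative and is not taken from any of the cited studies; the 80% figures merely echo the approximate accuracy described above.

    from scipy.stats import norm

    def d_prime(match_accuracy, mismatch_accuracy):
        """Sensitivity d' from proportion correct on match and mismatch trials."""
        hit_rate = match_accuracy                 # P("same" | same-identity pair)
        false_alarm_rate = 1 - mismatch_accuracy  # P("same" | different-identity pair)
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Around 80% accuracy on both trial types, as with optimised stimuli [21]:
    print(round(d_prime(0.80, 0.80), 2))  # prints 1.68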
Such error rates represent a best-case scenario. In the real world, the identification of unfamiliar people from photographs is typically based on sub-optimal material such as variable ambient images, low-resolution closed-circuit television (CCTV) footage, and ID photographs that are only updated every ten years. There is now substantial evidence that under conditions such as these, the difficulty of person identification increases dramatically [13,23,24,25]. In addition, there is little reason to anticipate that trained experts cope with such challenges any better than novices. Some studies show, for instance, that experienced passport officers perform comparably to untrained students when comparing photographs of unfamiliar faces [26] and similarly, that police officers perform comparably to students when identifying people from poor-quality surveillance footage [13]. More recent work also suggests that even when experts do outperform novices in such tasks, substantial error rates still arise [27,28,29].
Such findings raise concern surrounding the reliability of person identification judgements based on drone footage, which can be heavily degraded or pixelated. The quality of such footage may also be further reduced due to unfavourable aerial views, unpredictable ambient conditions, and angular momentum (see Figure 2). In addition, the difficulty of identifying people from drone footage has already been highlighted by drone image analysts, who are responsible for relaying visual information from drone footage to the military personnel who operate drone weaponry. These reports suggest that the camera feed from state-of-the-art military drones can be so degraded that it is difficult to distinguish a shovel from a rifle [6], men from women [7], and even adults from children [9].
Whilst it should follow intuitively that person identification under such limited conditions is difficult, laboratory studies can provide us with a “ballpark estimate” of precisely how error-prone this task might be. Here, we review a number of studies from Cognitive Psychology that provide some insight into this question. We begin by reporting factors that influence the identification of people from the face such as, for example, image degradation, and changes in viewpoint. We then proceed to consider how identity may also be derived based on cues from the body, and whether this process can be enhanced further when viewing people in motion, as opposed to when stimuli comprise static images. Finally, we summarise a recent psychological study of person identification from aerial footage that was recorded using an aerial drone, and discuss the real-world implications of these findings.

2. Person Identification from the Face

Facial identification can be highly challenging when the viewed material is degraded or pixelated. For instance, in one investigation, participants viewed a still image from poor-quality CCTV footage alongside a high-quality photograph of an unfamiliar face, and attempted to determine whether these depicted one person or two [30]. Error rates for this task were extremely high, with viewers incorrectly classifying almost a third of pairs as the same person when these actually depicted different people. The reverse error was also common, whereby nearly half of the image pairs showing the same person were mistaken for different people.
This level of performance aligns with a subsequent study in which the resolution (i.e., the number of pixels) of face images was systematically reduced [31]. In this study, participants matched pairs of optimized high-resolution images (i.e., 350 pixels in width) of the same person with 90% accuracy, and discriminated different identities with 86% accuracy. By contrast, when viewing a high-resolution face image alongside a heavily pixelated low-resolution counterpart (i.e., 8 pixels in width), participants matched same-identity face pairs with only 48% accuracy, and distinguished identity mismatches with 60% accuracy.
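To make this resolution manipulation concrete, the sketch below shows one simple way of producing such pixelation: an image is downsampled to a fixed width and then re-enlarged without smoothing. This is an illustrative reconstruction, assuming the Pillow imaging library, and the file names and widths are placeholders rather than the actual stimulus-generation procedure of [31].

    from PIL import Image

    def pixelate(path, target_width, display_width=350):
        """Downsample to `target_width` pixels across, then re-enlarge blockily."""
        img = Image.open(path)
        aspect = img.height / img.width
        small = img.resize((target_width, round(target_width * aspect)),
                           resample=Image.BILINEAR)
        # NEAREST upsampling keeps each low-resolution pixel as a uniform square.
        return small.resize((display_width, round(display_width * aspect)),
                            resample=Image.NEAREST)

    # A hypothetical face image reduced to 8 pixels in width:
    pixelate("face.jpg", target_width=8).save("face_8px.jpg")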
Perhaps reassuringly, there is some suggestion that errors arising from low image resolution can be partially offset by reducing the size of moderately pixelated face images [31], or by increasing the distance between a viewer and a pixelated face image [32]. It should be noted, however, that these manipulations mitigate errors without eliminating them. For instance, reducing the size of pixelated face images lowers error rates from around 37% to approximately 24% [31]. In other words, even under these improved conditions, people could still be expected to misidentify one in four pairs of faces.
How might these experimental findings translate to the identification of unfamiliar people from drones in the real world? Many laboratory studies selectively manipulate a single aspect of this task, such as image quality [31], whilst other variables, such as facial orientation, are held constant. Whilst this approach is highly informative about how specific aspects of the task influence identification performance, it makes it difficult to predict how successfully drone-recorded footage might support identification.
In addition, differences in facial pose add a further layer of difficulty to a task that is already challenging [17,33,34]. For instance, when comparing a frontally oriented face to one viewed from the side, participants are 10–15% more likely to mistake different identities for the same person than when both faces are viewed from the front, even under otherwise ideal viewing conditions [31,33]. Subsequent research has found that participants can match two frontally oriented faces as successfully as two faces that are both viewed from the side [35]. This converges with earlier evidence to suggest that identification is disrupted to a greater extent by comparing two differently oriented faces than by comparing two non-frontal faces that are viewed in the same orientation [34]. This implies that drone footage may be most effective for identification when the target person is depicted in a similar pose to a comparison image.
One further way in which facial comparison tasks in experimental settings are designed to maximise performance is by allowing participants unlimited time to compare photographs of unfamiliar faces [21,36,37]. However, when attempting to identify someone from drone-recorded material, the available time for making an identification may be limited for a number of practical reasons, such as when a suspected militant is only briefly exposed whilst moving from one location to another. Constraining the time for which participants view faces in the laboratory exerts some intriguing effects on facial comparison performance. Studies suggest that participants require between one and two seconds to decide whether two faces depict one person or two [38,39]. Shorter display durations appear to specifically reduce observers’ capacity to distinguish different identities, by around 10% [38,40]. Similar effects are observed under time pressure, whereby compelling participants to make identity judgements more quickly also increases the number of different identities that are mistaken for the same person [41,42]. In the context of drones, these represent errors of the worst kind, whereby, for example, a civilian may be wrongfully identified as a militant. If drone image analysts also experience time pressure to identify people from recorded footage, then this finding highlights yet further potential for identification errors based on drone-recorded footage.

3. Person Identification from the Body

Person identification may not always be based on facial information alone. For example, the distance from which people are observed may considerably reduce the utility of the face for identification [43]. In addition, similarities in facial appearance may result in two different people being classified as the same person when decisions are based solely on the face. Such errors may be offset by analyzing physical characteristics of the body, such as height, weight, and build.
There is some evidence that the body may support identification decisions under such conditions. For example, in one study, participants attempted to identify a person from video footage filmed at far, moderate, and close distances [43]. To isolate the contributions of the face and the body to such decisions, the videos were edited to show either the whole person, the person’s face without the body, or the person’s body without the face. Across all distance conditions, to-be-identified persons were identified more accurately from the face than from the body. Identity judgements based on the whole person (i.e., the face and the body) were also comparable in accuracy to those based on the face alone at moderate and close distances. More importantly, whole-person judgements were more accurate than judgements based on the face or the body alone when the target person was furthest away. These results indicate that when attempting to identify someone from far away, people integrate information from both the body and the face to make an identification. Conversely, as the distance between a viewer and a target narrows, identity decisions become primarily driven by information from the face.
This intriguing finding converges with earlier evidence that people utilise information from the body in identification tasks under limiting conditions [10], but without consciously being aware of doing so [44]. For instance, in one study, viewers rated facial features (e.g., the eyes and nose) as being more useful for identification than body features (e.g., the shoulders), despite being more successful at this task when both the face and the body were available for analysis than when only the face was presented [44]. Other work suggests that viewers utilise the body under more adverse conditions. For example, when comparing images of different people who happen to look very similar, the inclusion of the body appears to improve participants’ ability to distinguish one person from another. Likewise, the inclusion of the body in identity photographs also seems to aid performance when viewing very dissimilar images of the same person, compared to when comparing images of just the face [10]. In the context of drones, these findings suggest that under adverse conditions that preclude identification from the face alone, information from the body may be utilised to enhance accuracy.

4. Person Identification from Motion

Research has also considered the role of motion in person identification. In one study, participants more accurately identified pixelated familiar faces from video footage than when these were presented as still images [32]. In addition, students in another study could accurately identify their lecturers from poor-quality surveillance footage [13]. It is worth noting, however, that in this latter study, obscuring the gait of people in video footage reduced identification accuracy slightly, by around 5%, whilst obscuring the person’s face reduced accuracy enormously, by around 60%. That is to say, the removal of gait—a motion-based cue—was less detrimental to identification performance than the removal of the face—a non-motion-based cue—when viewing degraded footage. These findings therefore suggest that when attempting to identify familiar people, gait should perhaps not be considered a crucial factor for facilitating recognition.
Indirect support for this proposal comes from other work, which found that accuracy for the identification of familiar people is similar for video footage and still images [45]. Perhaps more importantly, this study also found that the identification of unfamiliar people appears to be worse when viewing degraded footage than when viewing a single “best” static image extracted from the source material. In the context of drones being deployed to record footage of people on the ground, this work suggests that selecting one useful static image may enhance identification accuracy over viewing an extended video clip.
At the same time, unfamiliar people might be identified more reliably from high-quality video footage than from high-quality photographs [11,46]. Yet, whilst such findings indicate that viewing dynamic rather than static people can benefit identification accuracy, these investigations are limited in what they can tell us about identifying people from drone-recorded footage, the quality of which currently appears to preclude even the discrimination of men from women [7].

5. Person Identification from Aerial Footage Collected by a Drone

Based on the studies described so far, it seems reasonable to assert that identifying people from drone footage is a difficult task. However, it is difficult to establish from these studies alone just how difficult this task is. Consider, for example, that accuracy is around 50% when matching heavily pixelated images of the same unfamiliar person [31]. This level of performance represents people’s ability to compare a pixelated face to a high-resolution counterpart under conditions that are otherwise designed to maximise face-matching accuracy. Performance in the real world becomes substantially more difficult to predict when considering additional factors, such as the inclusion of the body, which might improve accuracy [43], but also variations in height, position, and vantage point, which might increase errors. In other words, the accuracy of person identification from drones cannot be easily inferred from material that was not obtained via a drone.
To date, only one study has directly investigated the extent to which person identification by humans is possible from still images and video footage gathered using an aerial drone [47]. In this study, a Parrot AR drone with a maximum take-off weight (MTOW) of 300 g was used to record 14 male adults playing a game of football (soccer). According to NATO taxonomy, this type of drone falls into Class I (b), which describes minidrones that are deployed for person surveillance and target acquisition [5]. An aerial view from the perspective of the drone is shown in Figure 3. In addition, a close-up high-quality digital face photograph was obtained of each person on the same day.
Across several experiments, person identification from this drone-recorded footage was tested and found to be poor. In one experiment, which was designed to provide optimized conditions for studying person identification from drone-captured footage, observers viewed three still images of an unknown person that had been extracted from drone-captured video, presented alongside a high-quality face photograph (see Figure 4). Participants were then asked to decide whether the person in the drone stills was the same as the person depicted in the digital face photograph. Accuracy for matching three drone images to a high-quality photograph of the same person was 48%, whilst observers could distinguish different identities with 73% accuracy. A further experiment suggested that viewers could recognize familiar targets from 10-s segments of drone-captured video footage with only 33% accuracy. This is perhaps surprising, given research showing that familiar-face identification is reliable even under impoverished conditions [13,45]. Yet other studies show that even familiar faces can be misidentified when such decisions are based on photographs that are highly pixelated [16,32,48], and that a resolution “cut-off” exists beyond which faces can no longer be reliably identified. Consequently, it is conceivable that the drone footage recorded by Bindemann et al. [47] was of insufficient quality to support accurate recognition even of familiar people. Finally, observers who were unfamiliar with the targets in Bindemann et al.’s study were also asked to judge the sex, race, and age of targets from drone-captured images, and could only do so with an accuracy of 63%, 42%, and 27%, respectively. Together, these findings illustrate that the identification of both unfamiliar and familiar people from drone footage, as well as the perception of people more generally, represents a very difficult task.
Even these error rates may represent a best-case scenario for person identification from drones in the real world. For example, the drone that was employed by Bindemann et al. [47] recorded people from a maximum height of 15 m. By contrast, Class I NATO drones that are deployed for surveillance and reconnaissance operate at altitudes ranging from ground level to 15,000 ft, and at speeds of up to 80 kts [5]. In addition, weaponised Class III drones have a maximum operating altitude of 50,000 ft, and can travel at up to 250 kts [5]. Even smaller drones, such as those deployed by the police, operate at speeds of up to 38 kts and altitudes of up to 400 ft [49]. Considering this range of operational parameters, it would be unsurprising if the level of performance observed by Bindemann et al. [47] still substantially underestimates the true difficulty of identifying people from drone-recorded footage. Of course, the accuracy of this process can be expected to improve with further developments in technology that enhance the quality of recorded footage, as well as the stability of drones when airborne. However, we reiterate that person identification by humans remains difficult even under optimal viewing conditions [21,50].
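For comparison with the 15-m recording height of [47], these operating parameters can be expressed in metric units. The short sketch below simply applies the standard conversion factors (1 kt = 1.852 km/h; 1 ft = 0.3048 m); the labels are ours, not terminology from the cited sources.

    # Converting the operating parameters quoted above [5,49] to metric units.
    KT_TO_KMH, FT_TO_M = 1.852, 0.3048

    altitudes_ft = {
        "Class I max altitude": 15000,
        "Class III max altitude": 50000,
        "Police drone max altitude": 400,
    }
    for label, feet in altitudes_ft.items():
        print(f"{label}: {feet * FT_TO_M:.0f} m")
    print(f"Class III max speed: {250 * KT_TO_KMH:.0f} km/h")  # ~463 km/h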

6. Possible Solutions

One potential solution to this problem might be the development of person-recognition algorithms. Recent work has made progress in developing systems that are capable of tracking [51], detecting [52], and identifying [53] individuals in drone footage. In addition, automated recognition systems have demonstrated near-perfect performance in some benchmark tests [54,55]. Yet, such results encounter the same problem as benchmark tests of human face identification, namely that such tests represent a limited proxy for the real world. Indeed, algorithms have been found to perform substantially worse than humans in identification tests that incorporate relevant challenges from the real world, such as problematic lighting and non-frontal poses [44,56,57]. Perhaps for this reason, these systems continue to be monitored in practical settings by humans, who are responsible for verifying correct decisions made by these systems, whilst overruling cases where the system has made an incorrect judgement [28]. Current research suggests that human observers cannot reliably detect instances where the system has made an inaccurate identification [58], implying that algorithms bias the identity judgements of humans. This means that, for the foreseeable future, the final identification decision in real-world settings will continue to reside with the human observer.
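To illustrate what such systems compute, the sketch below shows the verification step that is typical of automated face recognition: two images are reduced to numerical embedding vectors, and a match is declared when their similarity exceeds a threshold. The embeddings and threshold here are placeholders; in real systems, such as those benchmarked in [54,55], the embeddings are produced by a deep neural network and the threshold is tuned on validation data.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_identity(embedding_a, embedding_b, threshold=0.6):
        # The threshold trades false matches against false non-matches;
        # 0.6 is an arbitrary placeholder, not a recommended operating point.
        return cosine_similarity(embedding_a, embedding_b) >= threshold

    # Random vectors standing in for a network's output on two face images:
    rng = np.random.default_rng(0)
    probe, gallery = rng.normal(size=128), rng.normal(size=128)
    print(same_identity(probe, gallery))  # unrelated vectors: almost surely False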
An alternative strategy for reducing identification errors in humans might be to recruit drone image analysts who are already highly proficient at facial comparison. A great deal of research in the last decade has focused on individual differences in facial identification [22,59,60,61,62], and it is now established that the ability to identify faces varies considerably from one person to another. For example, even under optimized conditions, some participants perform at chance level (i.e., 50%) when matching photographs of unfamiliar faces, whilst others demonstrate perfect accuracy [21]. Recent work has also identified a number of individuals—sometimes referred to as “super-recognizers”—who are remarkably good at recognizing faces even under adverse conditions [48,62]. For instance, one recent study showed that, despite heavy image pixelation, super-recognizers could distinguish images of celebrity faces from lookalikes with 93% accuracy. By contrast, student control participants could do this with only 73% accuracy [48].
It is currently unclear why super-recognizers are better at face recognition. There is some evidence that high proficiency in this task can be trained [28,56,57]. On the other hand, trained professionals have also demonstrated novice-level performance in other identification tasks [13,26,28]. In the context of person identification from drones, therefore, a possible strategy for the immediate future might be to recruit image analysts based on their performance in benchmark tests of person identification. Similar strategies are already being advocated for other settings that rely heavily on person identification, such as passport control [63], the police [64], and banking [65].

7. Conclusions

The aim of this review was to highlight some of the challenges that human observers face when identifying people from drones. Many obstacles to successfully identifying someone from drone-recorded footage can be anticipated from Cognitive Psychology research. Although there is at present no clear solution for mitigating the difficulties that arise when attempting to make an identification from drone footage, several important points emerge from current research. The first is that it is already difficult to identify unfamiliar people from optimized digital photographs of faces [21]. This means that even if high-resolution imagery of to-be-identified persons could be obtained from drone-recorded footage, some errors could still be expected to occur. A second, related point is that even if drones produce high-quality footage, additional factors such as time pressure and changes in facial orientation must nonetheless be accounted for. Finally, identification performance in laboratory settings is likely to underestimate the difficulty of identification in the real world. Whilst experimental stimuli provide a useful basis for establishing how poorly people perform under controlled conditions, they cannot account for the full range of conditions that arise in natural settings.

Author Contributions

M.C.F. drafted the initial version of this manuscript. This was then revised by M.B., who provided edits and suggestions, and contributed to writing the final version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The individual components for Figure 2 were retrieved from https://pixabay.com/.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Gibson, J.J. The Visual Perception of Objective Motion and Subjective Movement. Psychol. Rev. 1954, 61, 304–314.
2. Von Helmholtz, H. Helmholtz’s Treatise on Physiological Optics (Trans. from the 3rd German Ed.), 3rd ed.; Southall, J.P.C., Ed.; Optical Society of America: New York, NY, USA, 1924.
3. Merseyside police drone tracks car theft suspects. BBC News. 11 February 2010. Available online: http://news.bbc.co.uk/1/hi/england/merseyside/8510370.stm (accessed on 10 August 2018).
4. Sulleyman, A. UK police to use 24-hour drone unit to investigate crimes and search for missing people. The Independent. 20 March 2017. Available online: https://www.independent.co.uk/life-style/gadgets-and-tech/news/uk-police-drones-24-hour-unit-investigate-crimes-missing-person-search-cases-cornwall-devon-forces-a7639641.html (accessed on 10 August 2018).
5. Unmanned Aircraft Systems; Ministry of Defence: London, UK, 2017.
6. Linebaugh, H. I worked on the US drone program. The public should know what really goes on. The Guardian. 29 December 2013. Available online: https://www.theguardian.com/commentisfree/2013/dec/29/drones-us-military (accessed on 10 August 2018).
7. Fielding-Smith, A. “When you mess up, people die”: Civilians who are drone pilots’ extra eyes. The Guardian. 30 July 2015. Available online: https://www.theguardian.com/us-news/2015/jul/30/when-you-mess-up-people-die-civilians-who-are-drone-pilots-extra-eyes (accessed on 10 August 2018).
8. Amnesty International. Will I Be Next?: US Drone Strikes in Pakistan; Amnesty International: London, UK, 2013.
9. “Too easy”: Ex-drone operator on watching civilians die. BBC News. 5 December 2012. Available online: https://www.bbc.co.uk/news/av/world-us-canada-19820760/too-easy-ex-drone-operator-on-watching-civilians-die (accessed on 10 August 2018).
10. Rice, A.; Phillips, P.J.; O’Toole, A.J. The Role of the Face and Body in Unfamiliar Person Identification. Appl. Cogn. Psychol. 2013, 27, 761–768.
11. O’Toole, A.J.; Phillips, P.J.; Weimer, S.; Roark, D.A.; Ayyad, J.; Barwick, R.; Dunlop, J. Recognizing People from Dynamic and Static Faces and Bodies: Dissecting Identity with a Fusion Approach. Vis. Res. 2011, 51, 74–83.
12. Bruce, V.; Young, A.W. Understanding Face Recognition. Br. J. Psychol. 1986, 77, 305–327.
13. Burton, A.M.; Wilson, S.; Cowan, M.; Bruce, V. Face Recognition in Poor-Quality Video: Evidence from Security Surveillance. Psychol. Sci. 1999, 10, 243–248.
14. Johnston, R.A.; Edmonds, A.J. Familiar and Unfamiliar Face Recognition: A Review. Memory 2009, 17, 577–596.
15. Young, A.W.; Burton, A.M. Recognizing Faces. Curr. Dir. Psychol. Sci. 2017, 26, 212–217.
16. Jenkins, R.; Kerr, C. Identifiable Images of Bystanders Extracted from Corneal Reflections. PLoS ONE 2013, 8, 8–12.
17. Bruce, V.; Henderson, Z.; Greenwood, K.; Hancock, P.J.B.; Burton, A.M.; Miller, P. Verification of Face Identities from Images Captured on Video. J. Exp. Psychol. Appl. 1999, 5, 339–360.
18. Megreya, A.M.; Burton, A.M. Unfamiliar Faces Are Not Faces. Mem. Cogn. 2006, 34, 865–876.
19. Fysh, M.C.; Bindemann, M. Forensic Face Matching: A Review. In Face Processing: Systems, Disorders and Cultural Differences; Bindemann, M., Megreya, A.M., Eds.; Nova Science Publishing, Inc.: New York, NY, USA, 2017; pp. 1–20.
20. Johnston, R.A.; Bindemann, M. Introduction to Forensic Face Matching. Appl. Cogn. Psychol. 2013, 27, 697–699.
21. Burton, A.M.; White, D.; McNeill, A. The Glasgow Face Matching Test. Behav. Res. Methods 2010, 42, 286–291.
22. Bindemann, M.; Avetisyan, M.; Rakow, T. Who Can Recognize Unfamiliar Faces? Individual Differences and Observer Consistency in Person Identification. J. Exp. Psychol. Appl. 2012, 18, 277–291.
23. Bindemann, M.; Sandford, A. Me, Myself, and I: Different Recognition Rates for Three Photo-IDs of the Same Person. Perception 2011, 40, 625–627.
24. Jenkins, R.; White, D.; Van Montfort, X.; Burton, A.M. Variability in Photos of the Same Face. Cognition 2011, 121, 313–323.
25. Kemp, R.I.; Towell, N.; Pike, G. When Seeing Should Not Be Believing: Photographs, Credit Cards and Fraud. Appl. Cogn. Psychol. 1997, 11, 211–222.
26. White, D.; Kemp, R.I.; Jenkins, R.; Matheson, M.; Burton, A.M. Passport Officers’ Errors in Face Matching. PLoS ONE 2014, 9.
27. Davis, J.P.; Forrest, C.; Treml, F.; Jansari, A. Identification from CCTV: Assessing Police Super-Recogniser Ability to Spot Faces in a Crowd and Susceptibility to Change Blindness. Appl. Cogn. Psychol. 2018, 32, 337–353.
28. White, D.; Dunn, J.D.; Schmid, A.C.; Kemp, R.I. Error Rates in Users of Automatic Face Recognition Software. PLoS ONE 2015, 10, 1–14.
29. Wirth, B.E.; Carbon, C.C. An Easy Game for Frauds? Effects of Professional Experience and Time Pressure on Passport-Matching Performance. J. Exp. Psychol. Appl. 2017, 23, 138–157.
30. Henderson, Z.; Bruce, V.; Burton, A.M. Matching the Faces of Robbers Captured on Video. Appl. Cogn. Psychol. 2001, 15, 445–464.
31. Bindemann, M.; Attard, J.; Leach, A.; Johnston, R.A. The Effect of Image Pixelation on Unfamiliar Face Matching. Appl. Cogn. Psychol. 2013, 27, 707–717.
32. Lander, K.; Bruce, V.; Hill, H. Evaluating the Effectiveness of Pixelation and Blurring on Masking the Identity of Familiar Faces. Appl. Cogn. Psychol. 2001, 15, 101–116.
33. Estudillo, A.J.; Bindemann, M. Generalization across View in Face Memory and Face Matching. i-Perception 2014, 5, 589–601.
34. Hill, H.; Bruce, V. Effects of Lighting on the Perception of Facial Surfaces. J. Exp. Psychol. Hum. Percept. Perform. 1996, 22, 986–1004.
35. Kramer, R.S.S.; Reynolds, M.G. Unfamiliar Face Matching with Frontal and Profile Views. Perception 2018, 47, 414–431.
36. Megreya, A.M.; Sandford, A.; Burton, A.M. Matching Face Images Taken on the Same Day or Months Apart: The Limitations of Photo ID. Appl. Cogn. Psychol. 2013, 27, 700–706.
37. Noyes, E.; Jenkins, R. Camera-to-Subject Distance Affects Face Configuration and Perceived Identity. Cognition 2017, 165, 97–104.
38. Özbek, M.; Bindemann, M. Exploring the Time Course of Face Matching: Temporal Constraints Impair Unfamiliar Face Identification under Temporally Unconstrained Viewing. Vis. Res. 2011, 51, 2145–2155.
39. O’Toole, A.J.; Phillips, P.J.; Jiang, F.; Ayyad, J.; Pénard, N.; Abdi, H. Face Recognition Algorithms Surpass Humans Matching Faces over Changes in Illumination. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1642–1646.
40. Bindemann, M.; Avetisyan, M.; Blackwell, K.A. Finding Needles in Haystacks: Identity Mismatch Frequency and Facial Identity Verification. J. Exp. Psychol. Appl. 2010, 16, 378–386.
41. Bindemann, M.; Fysh, M.; Cross, K.; Watts, R. Matching Faces against the Clock. i-Perception 2016, 7, 1–18.
42. Fysh, M.C.; Bindemann, M. Effects of Time Pressure and Time Passage on Face-Matching Accuracy. R. Soc. Open Sci. 2017, 4, 170249.
43. Hahn, C.A.; O’Toole, A.J.; Phillips, P.J. Dissecting the Time Course of Person Recognition in Natural Viewing Environments. Br. J. Psychol. 2016, 107, 117–134.
44. Rice, A.; Phillips, P.J.; Natu, V.; An, X.; O’Toole, A.J. Unaware Person Recognition from the Body When Face Identification Fails. Psychol. Sci. 2013, 24, 2235–2243.
45. Bruce, V.; Henderson, Z.; Newman, C.; Burton, A.M. Matching Identities of Familiar and Unfamiliar Faces Caught on CCTV Images. J. Exp. Psychol. Appl. 2001, 7, 207–218.
46. Davis, J.P.; Valentine, T. CCTV on Trial: Matching Video Images with the Defendant in the Dock. Appl. Cogn. Psychol. 2009, 23, 482–505.
47. Bindemann, M.; Fysh, M.C.; Sage, S.S.K.; Douglas, K.; Tummon, H.M. Person Identification from Aerial Footage by a Remote-Controlled Drone. Sci. Rep. 2017, 7, 1–10.
48. Robertson, D.J.; Noyes, E.; Dowsett, A.J.; Jenkins, R.; Burton, A.M. Face Recognition by Metropolitan Police Super-Recognisers. PLoS ONE 2016, 11, 1–8.
49. Camber, R. Take Off for Police Drones Air Force: Remote-Controlled “Flying” Squad to Chase Criminals and Hunt for Missing People. Available online: http://www.dailymail.co.uk/news/article-4329714/Remote-controlled-flying-squad-chase-criminals.html (accessed on 10 August 2018).
50. Megreya, A.M.; Burton, A.M. Matching Faces to Photographs: Poor Performance in Eyewitness Memory (Without the Memory). J. Exp. Psychol. Appl. 2008, 14, 364–372.
51. Layne, R.; Hospedales, T.M.; Gong, S. Investigating Open-World Person Re-Identification Using a Drone. In Computer Vision—ECCV 2014 Workshops; Agapito, L., Bronstein, M., Rother, C., Eds.; Springer: Cham, Switzerland, 2014; pp. 225–240.
52. Bondi, E.; Fang, F.; Hamilton, M.; Kar, D.; Dmello, D.; Choi, J.; Hannaford, R.; Iyer, A.; Joppa, L.; Tambe, M.; et al. SPOT Poachers in Action: Augmenting Conservation Drones with Automatic Detection in Near Real Time. In Proceedings of the Thirtieth Conference on Innovative Applications of Artificial Intelligence (IAAI-18), New Orleans, LA, USA, 2–7 February 2018; pp. 7741–7746.
53. Nousi, P.; Tefas, A. Discriminatively Trained Autoencoders for Fast and Accurate Face Recognition. In Communications in Computer and Information Science; Boracchi, G., Iliadis, L., Jayne, C., Likas, A., Eds.; Springer: Cham, Switzerland, 2017; Volume 744, pp. 205–215.
54. Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition. Proc. Br. Mach. Vis. Conf. 2015, 1, 41.1–41.12.
55. Phillips, P.J.; Scruggs, W.T.; O’Toole, A.J.; Flynn, P.J.; Bowyer, K.W.; Schott, C.L.; Sharpe, M. FRVT 2006 and ICE 2006 Large-Scale Experimental Results. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 831–846.
56. Phillips, P.J.; Yates, A.N.; Hu, Y.; Hahn, C.A.; Noyes, E.; Jackson, K.; Cavazos, J.G.; Jeckeln, G.; Ranjan, R.; Sankaranarayanan, S.; et al. Face Recognition Accuracy of Forensic Examiners, Superrecognizers, and Face Recognition Algorithms. Proc. Natl. Acad. Sci. USA 2018, 115, 6171–6176.
57. White, D.; Phillips, P.J.; Hahn, C.A.; Hill, M.; O’Toole, A.J. Perceptual Expertise in Forensic Facial Image Comparison. Proc. R. Soc. B Biol. Sci. 2015, 282.
58. Fysh, M.C.; Bindemann, M. Human-Computer Interaction in Face Matching. Cogn. Sci. 2018, 42, 1714–1732.
59. Bindemann, M.; Brown, C.; Koyas, T.; Russ, A. Individual Differences in Face Identification Postdict Eyewitness Accuracy. J. Appl. Res. Mem. Cogn. 2012, 1, 96–103.
60. Fysh, M.C. Individual Differences in the Detection, Matching and Memory of Faces. Cogn. Res. Princ. Implic. 2018, 3, 20.
61. Robertson, D.J.; Jenkins, R.; Burton, A.M. Face Detection Dissociates from Face Identification. Vis. Cogn. 2017, 25, 740–748.
62. Russell, R.; Duchaine, B.; Nakayama, K. Super-Recognizers: People with Extraordinary Face Recognition Ability. Psychon. Bull. Rev. 2009, 16, 252–257.
63. Bobak, A.K.; Dowsett, A.J.; Bate, S. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills. PLoS ONE 2016, 11, 1–13.
64. Davis, J.P.; Lander, K.; Evans, R.; Jansari, A. Investigating Predictors of Superior Face Recognition Ability in Police Super-Recognisers. Appl. Cogn. Psychol. 2016, 30, 827–840.
65. Papesh, M.H. Photo ID Verification Remains Challenging despite Years of Practice. Cogn. Res. Princ. Implic. 2018, 3, 19.
Figure 1. An example pair of faces from the Glasgow Face Matching Test (GFMT) [21]. In the top row, two different images of the same person are presented (i.e., an identity match), whereas the bottom pair depicts two different people (i.e., an identity mismatch).
Figure 2. The quality of drone-recorded footage is susceptible to a number of factors, such as the height of the drone itself, as well as ambient conditions. In addition, such footage might be further hindered by obstacles at ground level such as trees and bystanders, as well as the movement of the target themselves.
Figure 3. An aerial view from the perspective of the drone employed by Bindemann et al. [47].
Figure 4. Example stimuli used by Bindemann et al. [47]. The left panel represents an identity match, whereby the high-quality digital photograph shows the same person as depicted in the three images extracted from the drone camera that are shown underneath. Conversely, the right panel depicts an identity mismatch, whereby the high-quality digital photograph depicts a different person to the one shown in the three images below.
