Review

Noncontact Sensing of Contagion

1 School of Engineering, University of South Australia, Mawson Lakes Campus, Adelaide, SA 5095, Australia
2 School of Biomedical Sciences and Pharmacy, University of Newcastle, Newcastle, NSW 2308, Australia
3 The Chahl Medical Practice, P.O. Box 2300, Dangar, NSW 2309, Australia
4 Electrical Engineering Technical College, Middle Technical University, Al Doura, Baghdad 10022, Iraq
5 Joint and Operations Analysis Division, Defence Science and Technology Group, Melbourne, VIC 3207, Australia
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(2), 28; https://doi.org/10.3390/jimaging7020028
Submission received: 28 October 2020 / Revised: 2 February 2021 / Accepted: 2 February 2021 / Published: 5 February 2021

Abstract:
The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and distill the clinical literature, as of early May 2020, on symptoms of COVID-19 that might be detected remotely. Vital signs associated with respiratory distress and fever, coughing, and visible infections have been reported. Fever screening by temperature monitoring is currently popular; however, improved noncontact detection is sought. Vital signs including heart rate and respiratory rate are affected by the condition. Cough, fatigue, and visible infections are also reported as common symptoms. Non-contact methods for measuring vital signs remotely have been shown to have acceptable accuracy, reliability, and practicality in some settings. Each has its pros and cons and may perform well in some challenges but be inadequate in others. Our review shows that, of the sensors studied to date, visible spectrum and thermal spectrum cameras offer the best options for truly noncontact sensing: thermal cameras because of their potential to measure all likely symptoms, especially temperature, with a single camera, and video cameras because of their availability, cost, adaptability, and compatibility. Substantial supply chain disruptions during the pandemic and the widespread nature of the problem mean that cost-effectiveness and availability are important considerations.

1. Introduction

COVID-19 cases emerged in late December 2019 as an outbreak of viral pneumonia with cough, fever, and fatigue [1]. It has since spread rapidly throughout the world with hundreds of thousands of reported cases and is now a major global public health threat [2]. It was declared a pandemic on 11 March 2020 by the World Health Organization (WHO) [3]. As of late March 2020, the number of cases worldwide was more than a million and the average death rate was 4.5%, with considerable variation between countries depending on the availability of medical services and public health measures [4].
The causative agent of COVID-19 was identified as a novel coronavirus (2019-nCoV) [5], later designated SARS-CoV-2 [6]. Coronaviruses are non-segmented, positive-stranded ribonucleic acid (RNA) viruses surrounded by a protein envelope with characteristic crown-like spikes on their surfaces. The coronaviruses that infect humans by cross-species transmission pose a major threat to human health and have caused two serious outbreaks of acute respiratory syndrome in the past two decades: severe acute respiratory syndrome (SARS) in 2003 and Middle East respiratory syndrome (MERS) in 2012.

2. Epidemiology

An understanding of the epidemiology of COVID-19 is evolving as the pandemic progresses, and more information should become available in the coming months. Although the early cases had some contact with a seafood (wet) market, it soon became clear that human-to-human transmission occurred [7]. COVID-19 has been found to have higher levels of transmissibility and pandemic risk than SARS [8]. Guidelines from Chinese health authorities in early February 2020 stated an average incubation duration of seven days, ranging from two to 14 days [7].
It also became clear early in the epidemic that the elderly were more susceptible, whereas cases in persons under 18 years of age were few. Pediatric patients have been found to have only mild to moderate symptoms of COVID-19, and many children were asymptomatic [9]. A further study confirmed these findings and proposed that children could be facilitators of transmission of COVID-19 [10]. The authors further suggested that there is an urgent need to investigate the role of children in the transmission of COVID-19.

3. Clinical Features of COVID-19

The most common early symptoms and signs of COVID-19 include fever, dry cough, sore throat, and fatigue. Less common symptoms are diarrhea, muscle aches, and headache, indicating that multiple systems are involved. In a proportion of patients, reportedly about 39% [1], the disease progresses to shortness of breath and acute respiratory distress syndrome (ARDS), requiring mechanical ventilation. Multiple organ failure causing death occurs in a small proportion of patients.
A study of 191 patients from two hospitals [11] showed that increasing probability of death was associated with older age, higher Sequential Organ Failure Assessment (SOFA) scores (used to determine the rate of organ failure), and levels of d-dimer (a fibrin degradation product, indicative of thrombosis) greater than 1 µg/mL. Higher levels of comorbidities such as hypertension, diabetes, coronary heart disease, and chronic obstructive lung disease were found in non-surviving subjects [11]. Development of heart failure has been identified as a particular risk in those patients with underlying cardiovascular disease [12]. Lower mortality has been reported with early detection and intervention for those patients at risk of serious illness [13].
The particular vulnerability of the cardiovascular system may be related to the ability of the spike protein of the coronaviruses to bind to angiotensin converting enzyme 2 (ACE2), a transmembrane aminopeptidase that is highly expressed in the heart and lungs and plays a vital role in the cardiovascular and immune systems [14]. The finding that ACE2 is a functional receptor for coronaviruses and acts as an entry point for the virus into cells and the fact that ACE2 is elevated in patients with cardiovascular disease [15] may explain the greater vulnerability of these patients.
As yet, there is no known effective treatment for COVID-19 [16]. The current treatment of patients is symptomatic, aimed mainly at supporting the cardiopulmonary system. The effect of a combination of the antiviral drugs lopinavir and ritonavir, which was found to result in fewer adverse clinical outcomes during the SARS epidemic, is currently under trial [17]. However, current research is primarily directed at finding a vaccine to prevent infection with COVID-19.
The long-term effects of COVID-19 remain unknown. However, in a 12-year follow-up study, subjects who recovered from SARS were found to have chronic cardiovascular damage and disorders of lipid and glucose metabolism [18]. The similar structure of the coronaviruses causing SARS and COVID-19 raises the possibility of chronic cardiovascular damage in patients with COVID-19 and underlines the importance of cardiovascular protection during treatment [18].

4. Diagnosis of COVID-19

The first definitive test for COVID-19 was Reverse Transcriptase Polymerase Chain Reaction (RT-PCR), which detects SARS-CoV-2 nucleic acid (RNA). This test was developed very early after identification of the causative virus. However, RT-PCR takes time and requires specialized laboratories and trained staff. Recently, serology antibody tests have been developed that detect IgG and/or IgM antibodies to SARS-CoV-2. These tests use finger-prick blood samples and provide results in 15–30 min. Their limitation is that it takes several days after an individual becomes infected for the antibodies to become detectable. Since COVID-19 is an emerging viral infectious disease, there is limited evidence currently available to assess the accuracy of these new tests [19].
Comparison of serial RT-PCR assays and chest computed tomography (CT) scan results obtained from patients with suspected COVID-19 showed that chest CT scans had a higher sensitivity than RT-PCR for diagnosis of COVID-19 [20]. Therefore, in the hospital setting, CT scans would be most useful in early diagnosis of suspected cases.

5. Symptoms of COVID-19 Detectable by Noncontact Sensors

The highly contagious nature of COVID-19 and the ongoing problem of detecting signs of the infection in large groups of people, such as travelers, make the rapid detection of these signs by noncontact sensors an important goal. The early clinical signs of COVID-19 are fever, cough, and fatigue. Later signs reported in hospitalized patients include increased heart rate, increased respiratory rate, and ocular signs. Those infected with COVID-19 who are seriously ill with inadequate oxygenation may have cyanosis, which is exhibited as a blue color of the body and visible mucosa [21]. Fever, cough, fatigue, ocular signs, and increased heart rate and respiratory rate are discussed further below.

5.1. Fever

Normal body temperature varies between individuals and with age, activity, and time of day. The average normal body temperature is generally accepted as 37 °C, although it has a wide range from 36.1 to 37.2 °C. Fever is commonly accepted as a body temperature over 38 °C, indicating infection or illness [22]. A recent study of hospitalized patients with COVID-19 showed that 98% of patients had temperatures above 37.3 °C and 78% had temperatures above 38 °C [17].
The presence of elevated body temperature (fever) is the early sign that is currently widely used as a screening method for detecting COVID-19 infection in groups of people such as travelers. Early advice from the WHO was that body temperature screening detected the majority of cases of COVID-19 [23].
Many different thermometry methods are used to detect fever in clinically ill subjects, including oral, rectal, axillary, auditory meatus, and tympanic membrane measurements. The method commonly used for non-invasive, rapid screening of large numbers of people is forehead infrared body temperature measurement. A study conducted using data obtained during the SARS epidemic in Taiwan, comparing infrared body thermometry with measurements obtained with a tympanic thermometer, showed that measurements of temperature from the auditory meatus but not the forehead correlated with tympanic temperature measurements [8]. A much higher rate of false-negative results was also reported for forehead measurements compared with auditory meatus measurements [8]. Despite the limitations of forehead infrared thermometry, it remains the most widely used screening method.
The adequacy of body temperature measurement as a screening method for detection of COVID-19 has been questioned, not only on methodological grounds, but also as a result of epidemiological studies in which cases of asymptomatic subjects transmitting the disease have been suspected [24]. The possibility of asymptomatic and presymptomatic transmission has recently been studied [25]. From the data available to date, the authors concluded that, based on serial intervals defined as the time between a primary case developing symptoms and a secondary case developing symptoms, it was likely that presymptomatic transmission occurred in about 10% of cases of COVID-19 [25]. The role of asymptomatic transmission is less clear, but there is some evidence that this might also occur [25,26].
Thus, there are two major limitations to the use of fever for the detection of COVID-19: the difficulty of reliably detecting elevated body temperature, and the possibility of asymptomatic subjects. As was suggested following the SARS epidemic [8], screening for other signs such as cough and cardiopulmonary changes would be necessary to reduce the risk of non-detection of COVID-19 cases.

5.2. Cough

A study of forty-one hospitalized patients with COVID-19 showed that 76% had cough [17]. Subjects with cough are highly likely to spread infection, and therefore it is important to screen for cough in groups of people. Coughing involves a characteristic explosive sound and movement of the rib cage that might be sensed remotely.

5.3. Fatigue

Fatigue is a subjective sensation that might not be detectable by sensors until subjects are more severely ill and exhibit objective signs. In these subjects, multiple facial morphological features such as those reported by Li et al. [27] might be used to detect fatigue.

5.4. Ocular Signs

Ocular signs of irritated eyes including conjunctival hyperemia, swollen eyes, and increased tear production have been reported in about 30% of patients with diagnosed COVID-19 [28]. Evidence of irritated eyes, such as rubbing of the eyes, might be detected by sensors in a fraction of subjects early in the disease.

5.5. Respiratory Rate and Heart Rate

The normal respiratory rate for adults at rest ranges from 14 to 20 breaths per minute. It has been reported that respiratory rate is a commonly neglected vital sign, which may be a predictor of clinical outcome [29]. Recent evidence suggests that a subject with a respiratory rate greater than 20 breaths per minute is probably unwell, and a subject with a respiratory rate greater than 24 breaths per minute is likely to be critically ill [29].
Dyspnea is a subjective sensation, also known as shortness of breath. A common symptom of COVID-19 is dyspnea, which may manifest as an increase in respiratory rate and heart rate. In an early study on hospitalized patients with COVID-19, 55% had dyspnea, and about half of those had a respiratory rate greater than 24 breaths per minute [17]. In a further study of 44,672 diagnosed cases of COVID-19, 14% had dyspnea and a respiratory rate greater than 30 breaths per minute [30].
The mean normal heart rate for adults is 72 beats per minute. In a study of the effects of acute febrile infection on cardiac rhythm in young adult males, it was shown that a body temperature rise of 1 °C caused an average heart rate increase of 8.5 beats per minute [31].
Respiratory rate and heart rate are vital signs that would prove useful in screening for subjects with suspected COVID-19.

6. Technology Used for Measuring Vital Signs

Human vital signs such as temperature, breathing rate (BR), heart rate (HR), heart rate variability (HRV), blood oxygen saturation (SpO2), and blood pressure (BP) are strong indicators of a person's state of health [32,33]. As shown above, the effects of COVID-19 on vital signs are significant for cardiorespiratory state and temperature. Certain other characteristics such as coughing and conjunctivitis are clearly detectable by video camera under the right circumstances, while being difficult to detect remotely using non-imaging technology. Coughing can also be detected acoustically. This review focuses on symptoms that might be detected using noncontact sensing technology, namely the internal parameters of temperature and cardiorespiratory rates.
There are various non-contact methods for measuring vital signs remotely, including magnetic induction, the Doppler effect with radar or sonar, video camera imaging, and thermal imaging [34]. These techniques have been shown to be effective in remotely monitoring vital signs with acceptable accuracy, reliability, and sometimes practicality [34]. Each method has its pros and cons and may perform well in some conditions but poorly in others. Doppler radar is highly affected by motion artefacts, can have biological effects on humans, and is not suitable for monitoring multiple subjects simultaneously. Visible spectrum and thermal spectrum cameras offer the best options of those studied: thermal cameras due to their potential to measure all likely symptoms, especially temperature, with a single camera, and video cameras due to their availability, cost, adaptability, and compatibility.

6.1. Thermal Imaging Technology

Infrared thermography (IRT), also known as thermal imaging, is a promising monitoring and diagnostic technique in the medical field [35]. For example, thermal imaging can be used to assess HR and BR, endocrinological disorders (especially diabetes mellitus), vascular disorders, neurological disorders, musculoskeletal disorders, oncology, regenerative medicine, and surgery. A popular application of thermal imaging is screening persons with fever at airports, schools, hospitals, etc. IRT is a passive, non-contact monitoring technique, which senses the radiation naturally emitted from an object such as human skin. Thermal imaging does not require an illuminating radiation source or a dedicated light source; this is among its most significant advantages over other non-contact techniques. Thermal cameras are not affected by illumination variation, can work in darkness, and are difficult to deceive using makeup or masks [36].
Thermal cameras respond to wavelengths starting at around 3 micrometers (µm), whereas visible wavelengths end at about 0.75 µm. An array of thermally sensitive elements behind a suitable lens produces an image much like an optical camera, with higher radiation measured from hotter objects. This produces a grey image in which lighter areas are warmer and darker areas are cooler. Depending on the application, this scale might be reversed for human operators. The use of "false color" is common in thermal images and thermal video, because it makes it easier for human operators to discern temperature. Colder temperatures are often assigned a shade of purple, blue, or green, whereas hotter temperatures can be given a shade of orange, red, or yellow [37]. Figure 1 shows a thermal image captured by a thermal camera in which the person is covered in shades of orange and yellow, whereas other areas are blue and purple, because the person radiates more heat than the surrounding objects. False color is purely a remapping of grey levels and is not a feature of the underlying thermal images.
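To make this last point concrete, the short sketch below (in Python with OpenCV; the synthetic frame and the choice of COLORMAP_JET are illustrative assumptions, not taken from the cited works) shows false color as nothing more than a lookup-table remap of grey levels:

```python
import cv2
import numpy as np

# A minimal sketch of false-color mapping: grey-level thermal data is
# simply remapped through a color lookup table; no thermal information
# is added or lost. Here we synthesize a dummy 8-bit "thermal" frame.
thermal_gray = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# COLORMAP_JET assigns blue/purple shades to low (cold) grey levels and
# red/yellow shades to high (hot) ones, as described in the text.
false_color = cv2.applyColorMap(thermal_gray, cv2.COLORMAP_JET)

# Reversing the greyscale before mapping flips the hot/cold convention.
inverted = cv2.applyColorMap(255 - thermal_gray, cv2.COLORMAP_JET)
```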
Thermal cameras work on the principle that all objects that have a temperature above absolute zero (−273.15 °C) emit electromagnetic energy, also known as infrared radiation or thermal radiation [38].
To accurately observe the temperature of a target using a thermal camera, the target should be an ideal black body. A black body in thermal equilibrium will absorb all incident electromagnetic radiation and emit an equal amount of energy through isotropic radiation. The distribution in wavelength of the emitted radiation is governed by the temperature of the body; thus, as the temperature increases, the distribution shifts towards shorter wavelengths. This phenomenon is described by Planck's law. A thermal camera infers the temperature of a body from the radiation it emits. The human body is not an ideal Planck black body radiator but is a close approximation in some bands [38]. It is generally accepted that the emissivity of human skin in the IR range of 3–14 µm is 0.98 ± 0.01 [38], which, while close to that of a black body, can cause disturbances due to non-ideal radiation emission and reflections. Thus, there will always be some degree of ambiguity when measuring skin temperature through thermal imaging. In a controlled environment, this can be accounted for with calibration. In uncontrolled environments, however, calibration is significantly more difficult.
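For readers who want to reproduce the physics, a minimal sketch of Planck's law and Wien's displacement law follows; the emissivity value of 0.98 matches the figure cited above, while the wavelength grid is our own choice:

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k, emissivity=1.0):
    """Spectral radiance of a (grey) body via Planck's law, W/(m^2*sr*m)."""
    return (emissivity * 2.0 * h * c**2 / wavelength_m**5
            / (np.exp(h * c / (wavelength_m * k * temp_k)) - 1.0))

# Human skin at ~310 K with emissivity 0.98, as cited in the text.
skin_temp = 310.0
wavelengths = np.linspace(3e-6, 14e-6, 500)   # the 3-14 um IR band
radiance = planck_radiance(wavelengths, skin_temp, emissivity=0.98)

# Wien's displacement law: the distribution peak shifts to shorter
# wavelengths as temperature rises; at 310 K it sits near 9.3 um.
wien_b = 2.897771955e-3                        # m*K
print(f"peak wavelength: {wien_b / skin_temp * 1e6:.2f} um")
```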
This is just one of many limitations that arise when using thermal cameras for the detection of elevated body temperature. Infrared camera manufacturers, such as FLIR [39], state that such technology can only be used to detect skin temperature (as opposed to core body temperature) and must be operated in a controlled environment to do so. The ambient temperature must be in the range of 20 to 24 °C, and the relative humidity in the range of 10% to 50%. The temperature measured on the skin's surface is offset from the subject's core body temperature; hence, measurements must be calibrated to be indicative of body temperature. Environmental, operational, and subjective factors such as convective airflow, reflective surfaces, IR contamination (from sources such as sunlight and heaters), sweat produced by the subject, wearable accessories covering part of the face (such as glasses or baseball caps), ambient temperature, humidity, and emissivity all affect the acquired data. Thus, thermal imaging may be used to determine anomalies relative to other subjects; however, many factors prevent determination of the absolute core body temperature of a given subject at range. It is suggested that the area medially adjacent to the inner canthus of the eye provides the most consistent measurement, as determined with mass screening [40].

6.1.1. Body Temperature Measured with Thermal Camera

In the literature, thermal cameras have been used by numerous researchers to extract body temperature, as shown in Table 1. For example, Bilodeau et al. [41] proposed a non-contact method to measure the body temperature of a moving animal or human in a laboratory setting based on IRT using a thermal camera. As shown in Figure 2, a region of interest (ROI) was selected manually and tracked automatically with a particle filter. A raw temperature signal was measured from the extracted skin pixels. To minimize camera measurement noise, a Kalman filter was applied to the raw signal. However, in the proposed method, the ROI had to be selected manually.
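A minimal sketch of the Kalman smoothing step is given below; the scalar constant-temperature model and the noise variances are illustrative assumptions rather than the parameters used in [41]:

```python
import numpy as np

def kalman_smooth(measurements, process_var=1e-4, meas_var=0.25):
    """Scalar Kalman filter for a slowly varying temperature signal."""
    x = measurements[0]   # initial state estimate (deg C)
    p = 1.0               # initial estimate variance
    smoothed = []
    for z in measurements:
        # Predict: temperature assumed locally constant, uncertainty grows.
        p += process_var
        # Update: blend the prediction with the new noisy measurement.
        gain = p / (p + meas_var)
        x += gain * (z - x)
        p *= (1.0 - gain)
        smoothed.append(x)
    return np.array(smoothed)

# Noisy skin-temperature readings around 36.8 deg C (synthetic).
rng = np.random.default_rng(0)
raw = 36.8 + 0.5 * rng.standard_normal(100)
clean = kalman_smooth(raw)
```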
Aubakir et al. [42] used a long wavelength infrared (LWIR) camera to monitor body temperature based on automatic face detection using the Viola–Jones (V–J) algorithm and ROI averaging. This method of ROI averaging is prone to perturbations due to environmental factors. Another study by Sharma et al. [43] proposed a contactless approach to measure temperature using the Viola–Jones face detection algorithm. Rather than computing the temperature of the face in each frame, the authors used an algorithm to select the best frame containing all the features of the face and then calculated the temperature for that frame. However, the Viola–Jones face detection method was sensitive to the subject's head movement.
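As an illustration of Viola–Jones-style detection with ROI averaging, the sketch below uses OpenCV's stock Haar cascade. Note that this cascade is trained on visible-light faces, so applying it to thermal frames, like the parameter values shown, is an assumption for demonstration only:

```python
import cv2
import numpy as np

# Load OpenCV's pretrained Haar cascade (a Viola-Jones detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def face_roi_mean(frame_gray):
    """Detect the largest face and return the mean intensity of its ROI."""
    faces = face_detector.detectMultiScale(frame_gray, scaleFactor=1.1,
                                           minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    roi = frame_gray[y:y + h, x:x + w]
    return float(np.mean(roi))   # ROI averaging, as in the cited studies

# On a frame with no detectable face this simply returns None.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print(face_roi_mean(frame))
```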
Lin et al. [44] presented a non-contact, automatic continuous body temperature measurement (CBTM) system using a thermal camera. In the proposed method, depicted in Figure 3, face detection was implemented using a neural network, and a kernel correlation filter (KCF) tracker was used to track the face region across frames. Nevertheless, participants were advised to sit and remain stationary, limiting the system's applicability to unconstrained real-world situations.
Sumriddetchkajorn et al. [45] introduced a parallel measurement technique to screen human temperatures that is applicable to large public areas by combining infrared technology, image processing techniques, and human flow management. Through image filtering, morphological operations, and particle filtering, large numbers of subjects could be processed efficiently. However, with 100% sensitivity but only 36.4% specificity, this approach requires refinement.
Using an infrared thermal camera, Silawan et al. [46] proposed a novel idea to estimate core body temperature. The method enhances the sensitivity and specificity of a fever screening system considering various environmental situations (Figure 4).
A pilot study by Thomas et al. [47] explored the use of an IR camera for measuring the core body temperature of workers in hot work environments. The strong correlation between the measured and reference data showed potential for the infrared camera as a non-invasive alternative to the thermometer pill and rectal thermometer.

6.1.2. HR and BR Measured with Thermal Camera

To assess a human's health condition, and especially to identify a virus-infected patient, measuring body temperature alone is not sufficient; other vital signs, such as breathing rate and heart rate, should be considered as well. Thermal imaging techniques can detect two physiological phenomena caused by cardiorespiratory activity. The first phenomenon, related to cardiac activity, is the slight heat variation produced by pulsating blood flow in the major superficial arteries of particular anatomical regions. A thermal camera can detect these heat differences to extract the pulse signal. The heat difference due to the pulsating blood flow is approximately 0.08 K, which is far smaller than the temperature of normal skin (310 K) from a black-body perspective [48]. The second phenomenon, related to breathing activity, is the small temperature difference around the nostrils of approximately 0.1 K [49] due to the heat exchange between warm exhaled air and cooler inhaled air.
A summary of studies in which thermal cameras were used to monitor breathing rate and heart rate is shown in Table 2. Murthy et al. [50] proposed a noncontact technique to monitor breathing rate using thermal imaging based on a statistical methodology that modelled breathing as a combination of expiration and non-expiration distributions. Fei et al. [51,52] enhanced the thermal imaging technique by using an optical bandpass filter and Fourier analysis based on the fast Fourier transform (FFT). In these studies, the ROI selection was done manually, and motion was not sufficiently compensated.
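A minimal sketch of the FFT step common to these pipelines is shown below; the sampling rate, band limits, and synthetic nostril-temperature trace are illustrative assumptions:

```python
import numpy as np

def dominant_rate_fft(signal, fs, fmin=0.1, fmax=0.85):
    """Estimate a periodic rate (in cycles per minute) from the FFT peak.
    fmin/fmax bound the physiologically plausible breathing band (Hz)."""
    sig = np.asarray(signal) - np.mean(signal)      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq                         # breaths per minute

# Synthetic trace: 0.25 Hz breathing (15 breaths/min) plus sensor noise.
fs = 30.0                                           # camera frame rate
t = np.arange(0, 60, 1.0 / fs)
trace = 0.05 * np.sin(2 * np.pi * 0.25 * t) + 0.01 * np.random.randn(len(t))
print(f"estimated BR: {dominant_rate_fft(trace, fs):.1f} breaths/min")
```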
Sun et al. [53] presented a novel non-contact technique for measuring a cardiovascular pulse using thermal imaging. The algorithm includes motion tracking, manual ROI selection, FFT, and adaptive estimation to measure cardiac pulse (Figure 5). However, in this study, the subject sample size was very small. Garbey et al. [54] further elaborated the Sun et al. method by increasing the sample size from five to 32 subjects and recording time from two minutes to five minutes. However, both methods are highly affected by the movement of subjects and the manual selection of ROI.
Chekmenev et al. [55] introduced a novel non-contact method to remotely measure HR and BR based on wavelet analysis using continuous wavelet transform (CWT), which gave a better performance than FFT. However, they only considered four subjects to validate their method, and the ROI selection was manual. In another study, Fei et al. [56] introduced a new and enhanced technique to monitor BR based on automatic tracking/localization of ROI using a coalitional tracking algorithm and wavelet analysis based on CWT. Using CWT, not only the breathing rate, but also the full breathing waveform was recovered.
Shakhih et al. [57] presented an approach to measure inspiration time (TI) and expiration time (TE) for three different breathing patterns that were simultaneously captured by infrared thermal imaging (ITI) and respiratory inductive plethysmography (RIP). However, the method was highly susceptible to head movement, and the image processing was insufficient.
Pereira et al. [58] presented a new robust method to remotely measure BR by using thermal imaging captured by an LWIR camera. As shown in Figure 6, in the proposed method, the ROI and region of measurement (ROM) were identified automatically, and then tracked using a particle filter framework. A Butterworth bandpass filter was applied to the raw signal. Finally, three estimators were calculated to compute instantaneous breathing frequencies. However, the authors only considered BR.
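The Butterworth band-pass step used in several of these pipelines can be sketched as follows; the cut-off frequencies and filter order are illustrative assumptions, not the values from [58]:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low_hz=0.1, high_hz=0.85, order=4):
    """Zero-phase Butterworth band-pass, e.g., for the breathing band."""
    nyq = 0.5 * fs
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)   # forward-backward: no phase shift

# Example: clean a noisy raw ROI-temperature trace sampled at 30 fps.
fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
raw = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.random.randn(len(t))
filtered = bandpass(raw, fs)
```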
The authors of [59] proposed a non-contact method to monitor cardiorespiratory signals using thermal imaging. BR was extracted using the same algorithm as [58]. As shown in Figure 7, feature points were detected and tracked within a manually selected ROI. Temporal filtering, principal component analysis (PCA), and peak detection were used to estimate HR.
Most of the above-discussed methods used thermal information around the nostril area, which is not suitable for long-term monitoring because the nose may leave the field of view of the thermal camera. To deal with this issue, Pereira et al. [60] presented an enhanced and robust method to estimate BR using thermal imaging, considering both the temperature difference around the nostril area and mouth and the movement of the shoulders. In the proposed algorithm, four ROIs were detected automatically and tracked using a particle filter framework. However, only five subjects were considered in this study.
Most of the works discussed above considered only adults as subjects for monitoring vital signs; however, a few studies considered infants as well. For example, Abbas et al. [61] first used infrared thermography to monitor the breathing rate of neonates in the neonatal intensive care unit (NICU) using the infrared camera setup shown in Figure 8 and an algorithm based on CWT. However, they did not consider a tracking algorithm, and the sample size was small. Pereira et al. [62] proposed a feasible approach based on IRT to monitor the BR of preterm babies, incorporating the same algorithm as [58]. However, the sample size was very small, as they considered only four infants. In [63], the authors presented a "black-box" or grid-based approach to remotely monitor the BR of neonates in the NICU using an LWIR camera. However, this method is highly affected by motion artefacts.

6.1.3. Image Processing Techniques Associated with Thermal Imaging

Thermal imaging systems use several image processing techniques to extract the required image details from thermal images. These range from low-level image processing to high-level deep learning-based techniques. Below, we gather the methods widely used for image analysis in thermal imaging.
Having a reliable object detection model is very important for the prediction of objects in thermal images. For detection in thermal infrared, hotspot detection or thresholding has historically been the major approach [64]. Thresholding combined with post-processing (e.g., merging and splitting of blobs) is an efficient detection technique in the case of high background/object contrast, a situation more or less common depending on the application [65,66].
In most cases in the literature on thermal infrared images, ROIs are characterized manually because of the absence of reliable face and facial landmark detection algorithms. In other cases, ROIs are defined semi-automatically or automatically; however, this is done on the condition that a full-frontal view or a certain image area of the face is available [67]. Some detection methods exploit the advantages of the visual and thermal modalities by combining information extracted from visual and thermal imagery of the same scene [68,69,70].
Recent advances in deep convolutional neural networks have helped enable sophisticated facial or human body detection and recognition systems, which prove valuable in surveillance and security systems applications [69]. Existing state-of-the-art facial recognition systems have demonstrated high-performance accuracy for automatic object detection and identification/recognition tasks [71,72,73,74,75].
Although there has been great success in object detection and recognition systems in the visible light domain [71,73,76], considerably less work has been done to achieve such performance in other domains, such as the thermal domain. A few notable efforts have applied deep learning methods originally developed for RGB images [75], developed new feature detectors for thermal images [77], built thermal imaging datasets with multiple labelled environment conditions [78], and pursued thermal imaging-based augmented vision [79].
Object detection methods have progressed significantly over the years, from simple contour-based methods using support vector machines (SVM) [77,80] to deep classification models [76,81] that utilize hierarchical representations of data [79].
Due to advances in deep learning, a more generalized form of object detection has evolved over time. Convolutional classifiers have replaced the exhaustive search for classification [79]. Object detection models have been proposed that work with relatively good accuracy on the visible spectrum using either (i) a two-stage system made up of a classifier connected with a region proposal network (RCNN [82]) or (ii) a single-stage network, which performs the classification and localization layers in a cohesive space (YOLO [76] and SSD [81]).
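As a hedged sketch of the two-stage family, the snippet below runs torchvision's pretrained Faster R-CNN on a single frame. The model is trained on visible-light COCO images, so its use on thermal data here is purely illustrative and would normally require fine-tuning:

```python
import torch
import torchvision

# A two-stage detector (Faster R-CNN, of the RCNN family) from torchvision.
# Assumption: a visible-light (COCO) model is a stand-in; a production
# thermal pipeline would fine-tune on labelled thermal imagery.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Fake 3-channel frame in [0, 1]; a single-channel thermal image would be
# replicated across channels to match the expected input format.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]    # dict with 'boxes', 'labels', 'scores'

keep = detections["scores"] > 0.5     # confidence-threshold the proposals
print(detections["boxes"][keep])
```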
Image segmentation partitions images into multiple parts or regions, often based on the characteristics of the pixels in the image. It could involve separating foreground from background, or clustering regions of pixels based on similarities in color or shape.
Because individuals are usually warmer than the background, they appear brighter in thermal images. Thus, a common starting point for extracting human candidates is a single threshold, which can be computed as simply as the midpoint of the image's intensity range [83]. Deep neural networks have been proposed to perform semantic segmentation of complex scenes recorded by synchronized RGB and thermal cameras [84,85]. However, these networks require a large amount of annotated data for training.
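The single-threshold scheme can be sketched in a few lines; the midpoint rule below is our reading of the simple method described above:

```python
import numpy as np

def segment_warm_regions(thermal_frame):
    """Single-threshold segmentation of warm (human) candidates.
    The threshold is the midpoint of the frame's intensity range,
    in the spirit of the simple scheme described in the text."""
    lo, hi = float(thermal_frame.min()), float(thermal_frame.max())
    threshold = lo + (hi - lo) / 2.0     # midpoint of the intensity range
    return thermal_frame > threshold     # boolean foreground mask

# Warm bodies appear brighter, so values above the midpoint are kept.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
mask = segment_warm_regions(frame)
```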
The main objective of image translation is to learn one or multiple mappings between the source domain and the target domain. This can be realized in both supervised [86] and unsupervised [87] ways, or even in semi-supervised ways [88].
Face recognition studies in the visible light domain (VLD) have achieved impressive performance due to the availability of massive datasets. With these great strides, it is natural to extend existing VLD-based face recognition systems into less studied domains such as near-infrared imaging (low light) and thermal imaging (no light) [72]. The different domains are shown in Figure 9. VLD performance cannot be easily replicated in the thermal domain due to the relatively small amount of training data available and the domain gap between thermal and visible light [72].
In recent times, the use of generative adversarial networks (GAN) [90] in arbitrary image-to-image translation applications has shown encouraging results [91]. Using an adversarial game approach, GANs significantly increased the quality of image-to-image translation. Pix2pix GAN for geometrically aligned image pair translations was presented by Isola et al. [86]; this work demonstrated phenomenal performance in arbitrary image-to-image translations. The pix2pix network was trained by Zhang et al. [92] to transform a human face in a thermal image into a color image. In a cross-modality setting of thermal to visible range, the quality of face recognition performance was improved by the translation. While the human face has a generally steady temperature, translating an image from color to thermal for the entire human body against an arbitrary background is more ambiguous and is therefore constrained by the sequence of events involving the individual [91]. Another prominent development in this area, TV-GAN [72], translates thermal face images into visible light images using a facial identity loss function, and can generate naturally lit face images whilst preserving identity.
With the right thermal imaging camera, the right lens, and adherence to the correct guidelines and standards, thermal imaging systems can be effective as a screening tool. However, this is still a challenging and active research area with much ongoing development. A thorough explanation of the limitations of thermal imaging can be found in [93]. Here, we summarize a few of them:
  • Although these systems may be in use for initial temperature assessment to triage individuals in high throughput areas (for example, airports, businesses, and sporting events), the systems have not been shown to be effective when used to take the temperature of multiple people at the same time. It is difficult to find stable solutions related to “mass fever screening”.
  • These systems measure surface skin temperature, which is usually lower than a temperature measured orally. Thermal imaging systems must be adjusted properly to correct for this difference in measurements.
  • These systems work effectively only when all the following are true:
    o The systems are used in the right environment or location.
    o The systems are set up and operated correctly.
    o The person being assessed is prepared according to instructions.
    o The person handling the thermal imaging system is properly trained.
  • Room temperature should be 68–76 °F (20–24 °C) and relative humidity 10–50 percent.
  • There are items that could impact the temperature measurement:
    o Reflective backgrounds (glass, mirrors, and metallic surfaces) that could reflect infrared radiation.
    o Movement of air in the room, direct sunlight, and radiant heat (portable heaters, electrical sources).
    o Strong lighting (incandescent, halogen, and quartz tungsten halogen light bulbs).
  • Some systems require the use of a calibrated blackbody (a tool for checking the calibration of an infrared temperature sensor) during evaluation to make sure measurements are accurate.

6.1.4. Thermal Imaging in COVID-19

In the current COVID-19 pandemic, thermal cameras are receiving increasing interest. According to the U.S. Food and Drug Administration (FDA) [93], when used correctly, thermal imaging systems have been shown to accurately measure surface skin temperature without being physically close to the person being evaluated. Thermal imaging systems thus offer a benefit over methods that require closer proximity or contact to measure temperature (for example, non-contact infrared thermometers or oral thermometers).
The FDA has issued guidance [94] providing a policy intended to help expand the availability of telethermographic systems (thermal imaging-based systems) used for triage body temperature measurement over the duration of the public health emergency.
Telethermographic systems are capable of determining skin-surface temperature at a reference body site (e.g., oral, tympanic membrane, inner canthus) when coupled with software able to detect facial features such as the eyes, nose, and mouth. The resultant value allows estimation of core body temperature. One advantage of using telethermographic systems for initial temperature assessment is their potential use in high-throughput areas (e.g., airports, businesses, warehouses, factories) and in settings where other temperature assessment products may be in short supply. As the screening system could potentially run autonomously and without an attendant, overall waiting time for screening would be reduced. The available scientific literature supports the use of telethermographic systems in the context of initial human temperature measurement during such a triage process [95]. Additionally, international standards and scientific literature have described guidelines for using telethermographic systems for initial temperature assessment for triage use and best practices for standardized performance testing of such products [96,97].
Thermal imaging techniques have recently been trialed effectively for COVID-19 screening purposes. Existing thermal imaging tools coupled with state-of-the-art machine learning algorithms have shown promising results for elevated temperature screening scenarios. Here, we discuss a few of them.
In a recently released white paper [98], Eagle Eye Networks reported that their tested cameras consistently measured temperatures within ±0.7 °F of measurements taken with a traditional thermometer. They conducted the experiment over approximately four weeks, testing thermal cameras for elevated temperature screening in real-world situations as well as laboratory comparisons. The testing was done with individuals and pairs of subjects (Figure 10). As subjects entered the building, their temperatures were automatically captured by multiple thermal cameras. The thermal camera readings were lower than those of the handheld thermometer, with differences ranging from −0.4 to −1.0 °F (an average difference of −0.7 °F).
A drone-based networked system was built to combat the COVID-19 pandemic [99]. The thermal imaging system was the most notable feature of the solution, used to capture images of people, to monitor social distancing, and for density-based thermal imaging. The drone was tested for COVID-19 operations in the Delhi/NCR region of India.
Different organizations and workplaces are looking for solutions to prevent the spread of the virus while making sure their employees and customers feel comfortable. Some real-world examples of thermal screening for COVID-19 prevention are listed below.
  • Amazon Inc., Tyson Foods Inc., and Intel Corp have started to use thermal cameras at their warehouses to speed up screening for feverish workers who could be infected with the coronavirus [100].
  • Thermal camera systems have been deployed in several international airports, such as Dulles International Airport [101], Hong Kong International Airport [101], Incheon International Airport [102], and Canberra International Airport [103] (Figure 11).
  • Asian cities have introduced temperature-check kiosks and smart bus shelters to make public transport safer [104,105] (Figure 11).
  • Universities have introduced thermal temperature screening stations [106].
Gostic et al. [107] analyzed the effectiveness of different traveler screening programs to limit further global spread of COVID-19. The thermal scanners used for screening were one of their parameter values. They reported that the sensitivity of infrared thermal scanners for fever detection was 70%, with 60–90% being the plausible range. Similar studies estimated sensitivity between 60% and 88% [108,109,110]. However, a handful of studies estimated very low sensitivity (4–30%). In general, sensitivity depended on the device used, the body area targeted, and the ambient temperature.

6.2. Video Camera Imaging Technology

Video imaging is a passive and non-contact modality that can be delivered from common sources of video data, including hand-held and fixed video cameras, webcams, and smartphones, or from sensor platforms such as unmanned aerial vehicles (UAV) and robots, as shown in Figure 12. Video analysis of vital signs generally relies on two phenomena. The first, known as color-based methods or imaging photoplethysmography (iPPG), depends on skin color variations caused by cardiorespiratory activity. The second, known as motion-based methods, relies on cyclic body motion such as the motion of an arterial pulse, head movements, or movements within the thoracic and abdominal regions due to cardiorespiratory activity.
Video cameras have been shown to be able to measure heart rate in controlled clinical settings, in the outdoors, from multiple people, and at long range [32]. Recent work has demonstrated breathing rate measurement in challenging scenarios, such as from a drone camera platform [111]. Video cameras have the ability both to diagnose and to identify people, which can be useful in many scenarios [112]. Their cost is low, their installed base is large, and the availability of video cameras that can be pressed into service is high.
Video cameras are often integrated with a microphone that might allow detection of coughing [113,114,115]. An important advantage of video cameras is the possibility of using advanced image processing techniques to detect posture and to recognize if a patient is coughing. From longer ranges, individuals in acute distress might be detected by their movements or if they should collapse [111].
A significant advantage of video cameras over all other sensors, including thermal cameras, is their ability to have arbitrary field of view, with the same camera capable of imaging bacteria and the ice caps of Mars with a change of lens [32].

6.2.1. Vital Signs Measured with a Webcam

Webcams have been used as low-cost video cameras to monitor vital signs in the literature, as shown in Table 3. Blind source separation (BSS) was a common technique used across studies to reduce motion artefacts. For example, Poh et al. [116] presented a robust and automatic technique using a webcam for monitoring heart rate based on blind source separation. As shown in Figure 13, in the proposed method, the ROI was selected automatically using the Viola–Jones [117] algorithm. The raw signal was obtained using spatial averaging, and independent component analysis (ICA) was applied to recover the desired source signal. Finally, an FFT was computed, and the frequency with maximum power in the spectrum was taken as the pulse frequency. It was possible to monitor three subjects simultaneously using the proposed algorithm. However, only small movements and no illumination variations were considered in the proposed work, and the second component obtained by ICA was always taken as the desired source signal, which might be limiting. Pursche et al. [118] and Feng et al. [119] also measured heart rate by means of a webcam based on ICA. Lewandowska et al. [120] introduced another non-contact method to monitor HR via a webcam, based on principal component analysis (PCA). They stated that PCA is better than ICA in terms of computational complexity and time; therefore, PCA can be an alternative technique if only heart rate needs to be measured. Nevertheless, subjects were instructed to remain still during the data collection.
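A condensed sketch of this ICA-based pipeline follows. Rather than always taking the second component, which the text notes may be limiting, it selects the source with the strongest spectral peak in a plausible heart rate band; the normalization and band limits are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def heart_rate_ica(rgb_traces, fs):
    """Sketch of the ICA-based webcam pipeline: spatially averaged
    R/G/B traces in, heart rate (beats per minute) out."""
    X = np.asarray(rgb_traces).T                  # samples x 3 channels
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # normalize each channel
    sources = FastICA(n_components=3, random_state=0).fit_transform(X)

    freqs = np.fft.rfftfreq(len(X), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 4.0)       # plausible HR band (Hz)
    best_power, best_freq = 0.0, 0.0
    # Pick the independent source with the strongest in-band spectral peak.
    for s in sources.T:
        spec = np.abs(np.fft.rfft(s - s.mean()))
        i = np.argmax(spec[band])
        if spec[band][i] > best_power:
            best_power, best_freq = spec[band][i], freqs[band][i]
    return 60.0 * best_freq

# Synthetic R/G/B traces with a 1.1 Hz pulse hidden in the green channel.
fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
pulse = 0.5 * np.sin(2 * np.pi * 1.1 * t)
rgb = [np.random.randn(len(t)),
       pulse + 0.5 * np.random.randn(len(t)),
       np.random.randn(len(t))]
print(f"HR estimate: {heart_rate_ica(rgb, fs):.0f} beats/min")
```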
Using a webcam, Bousefsaf et al. [121] extracted both instantaneous HR and breathing rate under normal head movements based on the continuous wavelet transform (CWT). Wu et al. presented a time-frequency analysis technique using CWT [122] and a motion-resistant spectral peak tracking (MRSPT) technique [123] to monitor heart rate using a webcam, considering seven situations including participants running, driving, and doing fitness training. To extract heart rate from moving subjects using a webcam, Feng et al. [124] proposed a robust monitoring system based on adaptive color variation between the green and red channels, followed by an adaptive bandpass filter (ABF). However, the above-discussed methods are susceptible to illumination variation and limited to short distances.
Signal decomposition is a common technique used in the literature to reduce illumination variations. Cheng et al. [125] introduced a robust technique to measure heart rate using a webcam. To reduce illumination variations, the authors combined joint blind source separation (JBSS) and ensemble empirical mode decomposition (EEMD) and also considered background images, as shown in Figure 14. However, the authors mainly considered controlled illumination variations, which limits the real-time applicability of the proposed technique. Another study by Xu et al. [126] tried to minimize the effect of illumination variations by combining partial least squares (PLS) and multivariate empirical mode decomposition (MEMD) to measure heart rate by means of a webcam under varying illumination scenarios. However, only artificial illumination produced by an LED lamp was considered. These two methods addressed only illumination variations and remained susceptible to motion artefacts and confined to short distances.

6.2.2. Vital Signs Measured with Digital Cameras

Digital cameras are the most popular means of capturing videos for monitoring vital signs. Chen et al. [127] introduced a robust system to extract heart rate using a digital camera, based on reflectance decomposition of the green channel and signal decomposition based on EEMD to reduce the noise due to illumination variations, as shown in Figure 15. However, EEMD can misinterpret periodic illumination variations as physiological signals, particularly when their frequency is close to the normal pulse frequency range of 0.75 to 4 Hz. This system was further improved in [128] using a multiple linear regression (MLR) model and a Poisson distribution to suppress the effects of ambient light variation. Nevertheless, neither system is appropriate for real-time applications.
Lee et al. [129] proposed a different technique to measure heart rate using multi-order curve fitting (MOCF) to minimize noise artefacts, recording video of participants with digital cameras while they watched television in a dark room. To remove the noise caused by illumination variations, the brightness signal was subtracted from the raw signal. Tarassenko et al. [130] demonstrated a novel technique using auto-regressive (AR) modelling and pole cancellation to suppress the aliased frequency components caused by artificial light flicker. However, the above methods considered only illumination variations and remained affected by motion artefacts.
Using a digital camera, Al-Naji et al. [131] monitored cardiorespiratory signals combining both EEMD and ICA to reduce noise artefacts caused by both illumination variation and motion artefacts.
Arts et al. [132], Cobos-Torres et al. [133], and Gibson et al. [134] also used digital cameras to monitor heart and respiratory rates of infants in the neonatal intensive care unit (NICU).
Charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) cameras were used by De Haan et al. [135,136], Wang et al. [137,138,139], and Yu et al. [140] to monitor vital signs. However, most of the above-discussed methods have limitations, including short measurement range, compensating for either motion artefacts or illumination variations but not both, and handling only a single participant at a time.

6.2.3. Vital Signs Measured with Other Sensors

The built-in camera of a smartphone was used by Kwon et al. [141] to monitor heart rate; they developed an iPhone-based application, FaceBEAT, for remotely monitoring heart rate. Bernacchia et al. [142] extracted heart rate and respiratory rate using a Kinect device based on ICA. Smilkstein et al. [143] and Gambi et al. [144] also exploited the Microsoft Kinect device for measuring heart rate using the Eulerian video magnification (EVM) technique.
Al-Naji et al. [145] first used a hovering UAV, as shown in Figure 16, to remotely monitor heart and respiratory rates based on an improved video magnification technique. The authors used advanced signal processing techniques, including signal decomposition using complete EEMD (CEEMD) and BSS using ICA, to reduce noise artefacts caused by motion and illumination variations. A robust non-contact technique was proposed in [146] to calculate the cardiorespiratory signal via a digital camera as well as a hovering UAV. Figure 17 shows scenarios for remote optical detection of vital signs. Drone cameras have successfully measured human vital signs, while facing challenges in image stability and vibration. Using stationary cameras with short focal lengths, extending the capability to multiple people who might be talking and moving has been considered. With longer focal lengths, issues of vibration, optical quality, and atmosphere start to dominate. In the proposed method, the video is first magnified using a video magnification technique (Figure 18). Then, an ROI is selected, and the raw cardiorespiratory signal is extracted. To remove noise artefacts caused by illumination variations and movement of subjects and cameras, the authors used a noise reduction technique integrating complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and canonical correlation analysis (CCA). FFT was used for spectral analysis, and two Butterworth bandpass filters were applied for filtering. Finally, the MATLAB built-in function "findpeaks" was used to count the peaks and thereby measure heart and respiratory rates; a SciPy equivalent of this peak-counting step is sketched below. Using the proposed method, it was possible to measure the vital signs of six people simultaneously, at a measurement range of up to 60 m, under both stationary and non-stationary conditions. Moreover, a graphical user interface (GUI) was introduced that enabled a user to load video data, select the type of magnification, and execute the proposed algorithm.
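The peak-counting step can be reproduced with SciPy's find_peaks, as sketched below on a synthetic pulse signal; the frame rate and minimum peak spacing are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

# Counting peaks in a filtered signal, analogous to the MATLAB
# "findpeaks" step described above (SciPy equivalent shown here).
fs = 30.0                                    # frames per second
t = np.arange(0, 30, 1.0 / fs)
heart_signal = np.sin(2 * np.pi * 1.2 * t)   # ~72 beats per minute

# Enforce a minimum spacing between peaks (here <= 4 Hz, i.e., 240 bpm).
peaks, _ = find_peaks(heart_signal, distance=int(fs / 4.0))
duration_min = (len(t) / fs) / 60.0
print(f"HR estimate: {len(peaks) / duration_min:.0f} beats/min")
```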

6.3. Combinations of Different Technologies

To enhance the performance of vital sign monitoring, different technologies can be combined, as shown in Table 4. For example, Gupta et al. [147] proposed an efficient multicamera system to measure HR and HRV by combining an RGB camera, a monochrome camera with a color filter, and a thermal camera. In the proposed method, face segmentation was done using conditional regression forests (CRF) to detect facial landmarks in real time, and an ROI was selected. The raw signal was calculated based on a spatial average of the pixels within the ROI for each channel, i.e., red, green, blue, magenta, and thermal (RGBMT). To recover the underlying source signals, ICA was used, and a bandpass filter was applied to the selected source signal. Finally, peak detection was performed to calculate HR and HRV. The experimental results showed that the GRT (green, red, and thermal) channel combination gave the most accurate results, demonstrating that the inclusion of more spectral channels can yield more robust and accurate measurements.
Hu et al. [148] presented a dual-mode imaging technique based on visible and long-wave infrared wavelengths to remotely extract breathing rate and breathing pattern using both an RGB and a thermal camera. After image registration, an ROI was identified in the RGB image using the cascade object detector based on the Viola–Jones algorithm and a screening technique using biological characteristics. Next, linear coordinate mapping was applied to select the corresponding regions in the thermal image. To select interest points from ROIs in the visible-light grey images, the Shi–Tomasi corner detection method, derived from the Harris–Stephens detector, was applied. Cross-spectrum ROI tracking was achieved using linear coordinate mapping. After that, the mean pixel intensity was obtained within the ROI of the thermal image. Across all frames, the raw pixel intensities of the ROIs were smoothed with a moving average filter and smoothing method. Finally, the breathing rate was extracted from the smoothed signal. By adding RGB images, it was possible to detect and track the face and facial tissue in thermal images more accurately and quickly. However, these systems may not be feasible for measuring vital signs at night.
Another dual-camera imaging system was proposed in [149] to measure BR and HR at night using an RGB infrared camera and a thermal imager. Image registration was done based on an affine transformation, face detection was performed using a pre-trained boosted cascade classifier or a fully convolutional network, and a discriminative regression-based approach was used to select landmarks on the face. IR images were used to select the ROI in the thermal images via linear coordinate mapping. ROI tracking in the thermal images was done by spatio-temporal context learning (STC). A moving average filter was applied to the raw signal, and finally, HR and BR were calculated.
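The linear coordinate mapping used by these dual-camera systems can be sketched with an affine transform; the control-point coordinates below are hypothetical, chosen only to illustrate mapping an RGB-detected ROI into thermal image coordinates:

```python
import cv2
import numpy as np

# Affine registration between RGB and thermal frames from three matched
# control points (hypothetical coordinates, for illustration only).
pts_rgb = np.float32([[120, 80], [500, 90], [300, 400]])
pts_thermal = np.float32([[60, 40], [250, 45], [150, 200]])
A = cv2.getAffineTransform(pts_rgb, pts_thermal)   # 2x3 affine matrix

def map_roi_to_thermal(x, y, w, h):
    """Linearly map an ROI detected in the RGB image into thermal
    image coordinates, as in the dual-camera pipelines above."""
    corners = np.float32([[x, y], [x + w, y], [x, y + h], [x + w, y + h]])
    mapped = cv2.transform(corners[None, :, :], A)[0]
    mx, my = mapped.min(axis=0)
    Mx, My = mapped.max(axis=0)
    return int(mx), int(my), int(Mx - mx), int(My - my)

thermal_roi = map_roi_to_thermal(200, 150, 80, 80)
```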
Bennett et al. [150] used both a thermal and an optical camera to monitor heart rate and blood perfusion based on EVM. After identifying the ROI in both the thermal and optical videos, the average intensity was computed within the ROI. A Butterworth lowpass filter was then used to filter these signals, which were subsequently normalized. The experimental results for blood perfusion showed that optical video with EVM was sensitive to skin color and lighting conditions, while thermal video with EVM was not. However, in the proposed method, movement of the subject was not considered, and the sample size was small. Although a temperature response caused by blood occlusion was sensed by the thermal camera, it was not in the pulsatile manner of perfusion.

7. Hypoxemia Detection Using Video Cameras

Blood oxygen saturation (SpO2), alongside heart rate, respiratory rate, and body temperature, is an important vital sign. It is a relative measure of the amount of oxygenated hemoglobin with respect to the total amount of hemoglobin in the blood. It shows whether an individual has an adequate supply of oxygen, hence indicating the condition of the cardiorespiratory system. To detect hypoxemia, it is important to monitor the level of oxygen saturation continuously.
In recent years, several studies have used video cameras to measure blood oxygen saturation remotely, under either ambient light or a dedicated light source, based on the ratio-of-ratios principle, which compares relative pulsatility at different wavelengths. Some studies used monochrome cameras and others RGB cameras to capture video of participants.
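The principle can be stated compactly: the relative pulsatility (AC/DC) of the PPG signal is computed at two wavelengths, their ratio R is formed, and SpO2 is estimated from an empirical linear model SpO2 ≈ A − B·R. The sketch below uses synthetic signals, and the constants A and B are placeholders that, in practice, must be calibrated against a reference oximeter for the specific wavelengths and camera.

```python
# Minimal sketch of the ratio-of-ratios principle; signals and calibration
# constants are assumed placeholders, not values from any reviewed study.
import numpy as np

fs = 30.0
t = np.arange(0, 10, 1 / fs)
red_ppg = 1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t)   # per-frame ROI mean, red
ir_ppg = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)    # per-frame ROI mean, IR

def relative_pulsatility(sig):
    """AC/DC: pulsatile amplitude relative to the baseline level."""
    return np.std(sig) / np.mean(sig)

R = relative_pulsatility(red_ppg) / relative_pulsatility(ir_ppg)
A, B = 110.0, 25.0                         # assumed empirical calibration
spo2 = A - B * R
print(f"R = {R:.2f}, SpO2 ~ {spo2:.1f}%")
```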

7.1. SpO2 Measured with Monochrome Camera

Wieringa et al. [151] first introduced "SpO2 camera" technology using a CMOS camera and an LED ringlight at three wavelengths (660, 810, and 940 nm). However, they did not present any SpO2 results because of the poor signal-to-noise ratio of the photoplethysmographic signals they obtained. Humphreys et al. [152,153] also demonstrated a noncontact pulse oximeter using a CMOS camera and LED arrays at two wavelengths (760 and 880 nm). Although the authors experimented on ten volunteers, accurate SpO2 measurement was not possible owing to the large noise in the PPG signal caused by the low frame rate and illumination variations.
Kong et al. [154] proposed a noncontact scheme to screen blood oxygen saturation using two CCD cameras, each fitted with a different narrow bandpass filter (520 and 660 nm), under ambient light. The experimental results showed good agreement between the proposed method and the reference pulse oximeter. However, the study only covered a small SpO2 range (97–99%), and the method was highly susceptible to motion artifacts. Saho et al. [155] presented a contactless technique for monitoring SpO2 using a CMOS camera and an illumination system of two identical LED arrays at two wavelengths (611 and 880 nm). To evaluate the technique, they conducted an experiment over an SpO2 range of 83–98%, and the results were consistent with those obtained by a pulse oximeter. However, the method required a dedicated light source, which could irritate participants, and the system was constrained by motion artefacts, light scattering, and a short operating range.
Verkruysse et al. [156] investigated the feasibility of calibrating noncontact pulse oximetry using two monochrome cameras, each mounted with a spectral bandpass filter at a center wavelength of 675 nm (red) or 842 nm (IR). They experimented on 26 healthy subjects under hypoxic and normoxic conditions (83–100%) and slow cooling. However, the proposed method was strongly affected by low pulsatile strength and motion artefacts. Moço et al. [157] likewise assessed the feasibility of calibrating contactless SpO2 using the ratio of ratios of red and green light. They used three monochrome cameras, fitted with green, red, and near-infrared bandpass filters (580, 675, and 840 nm), to capture video of 46 healthy volunteers under hypoxic and normoxic conditions (85–100%) and slow cooling. The authors demonstrated that SpO2 could be calibrated with red and green light under controlled settings; however, the accuracy was lower than that achieved in the typical red-NIR spectrum.
Gastel et al. [158] first introduced a motion-robust contactless monitoring system to measure SpO2 based on a new principle, the adaptive PBV (APBV) method. They used three identical monochrome cameras equipped with optical filters centered at 760, 800, and 840 nm, respectively. To validate the method, they tested healthy moving volunteers whose SpO2 varied from 80 to 100%, with hypoxemia induced by breath-holding. The experimental results showed that the new principle outperformed existing noncontact ratio-of-ratios approaches. However, the number of participants used to evaluate the method was very limited.
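The APBV idea can be caricatured in a few lines: each candidate SpO2 value implies a blood-volume-pulse signature across the camera wavelengths, a pulse is extracted with each signature using the PBV weighting of [136], and the candidate yielding the cleanest pulse is taken as the estimate. The signatures and data below are made-up placeholders; real signatures must be derived from absorption spectra and camera responses.

```python
# Schematic sketch of adaptive PBV (APBV) SpO2 estimation as in [158];
# all signatures, signals, and parameters are assumed placeholders.
import numpy as np

def pbv_pulse(C, pbv):
    """PBV-weighted pulse extraction: w solves (C C^T) w = pbv."""
    w = np.linalg.solve(C @ C.T, pbv)
    return w @ C

def snr(x):
    """Crude periodicity score: strongest spectral bin over the rest."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return spec.max() / (spec.sum() - spec.max())

fs = 30.0
t = np.arange(0, 10, 1 / fs)
true_sig = np.array([0.30, 0.80, 0.52])          # signature at the true SpO2
C = 0.1 * np.random.randn(3, len(t))             # traces at 760/800/840 nm
C += np.outer(true_sig, np.sin(2 * np.pi * 1.2 * t))

candidates = {80: np.array([0.55, 0.80, 0.30]),  # placeholder signatures
              90: np.array([0.42, 0.80, 0.41]),
              100: np.array([0.30, 0.80, 0.52])}
estimate = max(candidates, key=lambda s: snr(pbv_pulse(C, candidates[s])))
print(f"APBV estimate: SpO2 ~ {estimate}%")
```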

7.2. SpO2 Measured with RGB Camera

Instead of using a monochrome camera, a few researchers used RGB cameras to monitor SpO2 based on the normalized ratio of the red and blue channels. For example, Tarassenko et al. [130] introduced a noncontact method to monitor SpO2 using a digital video camera under ambient light. Unlike other studies, they conducted a large experiment, on 46 patients in the Oxford Kidney Unit whose SpO2 varied from 87% to 95%. However, calibration remained a major obstacle to calculating SpO2. Using a webcam and ambient light, Bal et al. [159] calculated SpO2 with a skin detector that identified only skin pixels within the selected face region. To evaluate their method, they tested six healthy participants and three paediatric intensive care unit (PICU) patients; the results showed a strong correlation with a Masimo oximeter over an SpO2 range of 93–99%.
Another study by Guazzi et al. [160] proposed a novel method to monitor changes in oxygen saturation using an RGB camera and two LED lighting panels in a controlled environment. An automated ROI-selection process was adopted by calculating signal-to-noise ratios for each candidate ROI. To validate the method, an experiment was conducted on five healthy participants over an SpO2 range of 80–100%, each spending 40 min in a purpose-built chamber. However, the method was affected by illumination variations, camera spectral responses, and physiological factors such as skin type and melanin concentration. A low-cost noncontact method was introduced by Rosa et al. [161] using a Raspberry Pi camera and ambient light to monitor SpO2 based on Eulerian video magnification (EVM), which was employed to amplify the light absorption variations of the red and blue channels. The proposed algorithm showed good accuracy compared with a commercial pulse oximeter. However, the method was constrained to an SpO2 range of 92–99%, a small population, and short range.
From the above discussion, we find that most methods used a commercial pulse oximeter as the ground truth for validation, which itself has measurement error. Moreover, most methods were limited by small test populations, motion artefacts, illumination variations, short distances, and narrow SpO2 ranges. For real applications, especially clinical ones, SpO2 must be monitorable over a broader range (at least 80% to 100%). Controlled hypoxemia tests could help induce greater variation in blood oxygen levels.

8. Acoustic Detection of Respiratory Infection

Cough detection via acoustic sensing has been a topic of interest for many years. This approach is relatively non-intrusive and can be completely non-contact, requiring only an acoustic sensing device within a reasonable proximity (governed primarily by the level of background noise) and a device to perform the requisite signal processing. Some implementations have used a contact microphone [162,163,164,165]; however, many modern approaches have adopted the ubiquitous smartphone as the audio capture (and, in some instances, processing) device [166,167,168,169]. A cough event lasts somewhere between 300 [170] and 650 ms [171] and can be described by three distinct phases: the expulsive phase, the intermediate phase, and the voiced phase [172].
It is common to divide a cough detection system into two distinct stages: data pre-processing for feature extraction, and machine learning for classification. The primary difference between approaches is the choice of features and the type of network used for inference. A commonality in the pre-processing stage is down-sampling of the input signal. Many recording devices capture audio at 44.1 kHz, which appears unnecessarily high for cough detection: authors report success with 22 kHz [171], 16 kHz [173], 11.25 kHz [170], and even down to 400 Hz [168], each achieving a classification accuracy greater than 90% through various methods. A down-sampled signal reduces the computational requirements of the subsequent feature extraction, which is beneficial for embedded applications.
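The down-sampling step itself is short; the sketch below uses scipy's polyphase resampler. The 44.1 kHz source, the 4:1 factor, and the random stand-in audio are illustrative assumptions.

```python
# Minimal sketch of the down-sampling step common to these pipelines; the
# rates and the stand-in audio are assumed for illustration.
import numpy as np
from scipy.signal import resample_poly

fs_in = 44100                               # typical recording rate (Hz)
audio = np.random.randn(2 * fs_in)          # stand-in for 2 s of microphone audio
audio_ds = resample_poly(audio, up=1, down=4)   # 4:1, with anti-alias filtering
print(len(audio), "->", len(audio_ds), "samples at", fs_in // 4, "Hz")
```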
The features extracted for the classifier vary between applications. A recent implementation [170] gives a good example of the diversity of features. The signal was first split into 75 ms windows with 19 ms overlap (to avoid boundary effects); this window size was justified as allowing accurate spectral estimation while accounting for the non-stationary nature of the signal. Each 75 ms frame was divided into five non-overlapping frequency bands spanning 0 to 5.525 kHz. From each band of a given frame, a feature set was extracted: spectral centroid, bandwidth, crest factor, flatness, flux, roll-off, peak energy, Renyi entropy, kurtosis, skewness, 90%-to-50% energy frequency ratio, relative power, harmonic ratio, Mel-frequency cepstrum coefficients, audio spectrum flatness, normalized audio spectrum envelope, tonal index, chromatic entropy, and sub-band spectral centroid histograms. In total, 117 features were extracted per window; feature selection then removed redundant sources of information, reducing the count to 29. A support vector machine (SVM) was used for training and classification, yielding approximately 90% sensitivity and 80% specificity.
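A much-reduced sketch of this windowed feature-extraction-plus-SVM pattern is given below. Only three of the 117 features are computed, no sub-band split or feature selection is performed, and the labelled audio consists of synthetic placeholders; the window sizes follow the figures quoted above.

```python
# Much-reduced sketch of windowed feature extraction followed by an SVM,
# after the pattern of [170]; the "cough"/"non-cough" audio is synthetic.
import numpy as np
from scipy.signal import get_window
from sklearn.svm import SVC

fs = 11025
win_len = int(0.075 * fs)                   # 75 ms analysis windows
hop = win_len - int(0.019 * fs)             # 19 ms overlap between windows

def frame_features(x):
    """Spectral centroid, bandwidth, and flatness of one window."""
    spec = np.abs(np.fft.rfft(x * get_window("hann", len(x)))) + 1e-12
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)
    bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * spec) / np.sum(spec))
    flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)
    return [centroid, bandwidth, flatness]

def extract(signal):
    frames = [signal[i:i + win_len] for i in range(0, len(signal) - win_len, hop)]
    return np.array([frame_features(f) for f in frames])

rng = np.random.default_rng(0)
cough = extract(rng.standard_normal(fs))             # placeholder cough audio
other = extract(np.sin(0.1 * np.arange(fs)))         # placeholder non-cough audio
X = np.vstack([cough, other])
y = np.r_[np.ones(len(cough)), np.zeros(len(other))]
clf = SVC(kernel="rbf").fit(X, y)                    # per-window classification
print(clf.predict(X[:3]))
```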
A similar approach can be seen in Matos et al. [162], who used a keyword-spotting approach to detect cough sounds with a hidden Markov model (HMM) classifier. The work in [164] extracts spectral, noise, and prosody features and tests their efficacy with three separate classifiers: an artificial neural network (ANN), a Gaussian mixture model (GMM), and an SVM; the GMM performed best for most feature sets. Other applications [165,166,168,171,174,175,176,177,178] use some subset of the above-mentioned features (except for [173], which uses gammatone frequency cepstral coefficients) in conjunction with various machine learning classifiers, such as logistic regression [178], k-nearest neighbour [166], a binary tree classifier [168], an SVM [178], and a convolutional neural network (CNN) [171,177].
In most cases, a combination of spectral and cepstral features is extracted from sub-band frames. The cepstrum is typically used for speech detection but, judging by its prevalence in the literature, also appears to contain a significant amount of information pertinent to cough detection. Interestingly, very few applications make use of the CNN architecture, despite its efficacy in related fields such as environmental sound classification [179], heart sound feature extraction [180], snore sound classification [181], and bird sound classification [182]. In such applications, it is common to simply use the mel-spectrogram of the audio signal as the two-dimensional input to the network [179]. Various techniques may also be used to augment the dataset, such as time stretching, pitch shifting, dynamic range compression, and the addition of background noise [179]. Such techniques may transfer well to cough-detection datasets, which are inherently sparse due to the nature of the signal [173]. Recurrent neural networks may also be a viable alternative; although more computationally intensive, they have proven to perform well in speech recognition tasks [183].
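A minimal sketch of preparing augmented mel-spectrogram CNN inputs follows, assuming the librosa library is available; the clip, sampling rate, augmentation magnitudes, and mel-band count are illustrative, and training of the CNN itself is omitted.

```python
# Minimal sketch of mel-spectrogram CNN inputs with the augmentations
# suggested in [179]; librosa is assumed, and all parameters are illustrative.
import numpy as np
import librosa

sr = 22050
y = np.random.randn(sr).astype(np.float32)        # stand-in for a 1 s clip

stretched = librosa.effects.time_stretch(y, rate=1.1)        # time stretching
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # pitch shifting
noisy = y + 0.005 * np.random.randn(len(y))                  # background noise

def to_logmel(clip):
    """2-D log-mel image of the kind fed to a CNN."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

inputs = [to_logmel(c) for c in (y, stretched, shifted, noisy)]
print(inputs[0].shape)                            # (n_mels, n_frames)
```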
Several limiting factors affect the efficacy of cough detection systems in practice. The signal-to-noise ratio is often much higher in training data than in deployment, owing to the nature of the data collection process. It is therefore recommended that training data contain a significant amount of noise, as sounds such as laughter, sneezing, talking, and throat clearing are commonly detected as false positives [164,169]. The specificity of the network to a given person and recording device also poses limitations in new scenarios. Barata et al. [171] addressed device specificity by capturing samples with many different types of microphone; their results indicate a reduced classification accuracy (by approximately 5%) when the classifier is run on recordings from a microphone not used during training, reinforcing the need for a diverse training set if developers seek hardware abstraction. Computational resource requirements also limit any embedded real-time application. Testing has not been conducted with CNN architectures; however, the standard approach of spectral and cepstral feature extraction and classification can be heavily resource-consuming, costing computational resources and battery life [166]. This is an impediment to embedded mobile applications and must be considered during design. Larson et al. [169] demonstrated a unique approach to this problem: the spectral and cepstral features are extracted on the mobile device and sent to a cloud-based classifier, reducing the processing required of the embedded system.
As a final note, relevant to the current situation, further development could help discriminate between a productive cough and an unproductive cough. An example of such work can be seen in Murata et al. [184], which, if successfully implemented, could assist in discriminating between subjects with COVID-19 and those with other illnesses.

9. Video Based Cough Detection

Many cough detection studies focus on detecting the cough through the sound the person makes. This is not always reliable, for example when there is no sufficiently strong sound signal to detect, such as when people cough while covering their mouth or are too weak to make a sound while coughing.
Video-based cough detection provides an alternative in such cases and can also be used to cross-validate the detections of acoustic models. Here, we review the few notable studies on cough detection using visual features.
Cough recognition is widely studied under general action recognition, and some popular action recognition datasets contain coughing action classes. The NTU RGB+D dataset [185] includes a separate coughing/sneezing class. The UWA3D dataset [186] also contains a coughing action class of 71 clips. ALMOND [113] is a dataset created by merging several action recognition datasets; its coughing/sneezing class contains a total of 1019 clips collected from the NTU RGB+D and UWA3D datasets. Thi et al. [114] presented a new dataset targeting sneezing and coughing detection in videos. Their Sneeze–Cough dataset was created to help recognize flu-like behavioral symptoms in public areas and contains 960 video clips collected from 20 human subjects (eight females and 12 males) aged 20 to 50 years.
Compared with acoustic cough detection, relatively few studies have addressed video-based cough detection. Smith et al. [187] performed a cough counting study comparing video and audio recordings of eight patients with chronic cough, recorded overnight in laboratory conditions using a video camera with infrared lighting and simultaneous digital sound recording. They proposed ambulatory digital audio recordings as the gold standard for ambulatory validation of automated cough monitoring devices. Another notable study recognized flu-like symptoms from videos [114]: the authors developed a new family of kernels that explicitly integrate space–time layout and Bag-of-Words representations, evaluated on the new Sneeze–Cough dataset. Buzzelli et al. [113] demonstrated a relatively similar but more generalized vision-based system for monitoring common actions of elderly people at home, consisting of deep learning models for subject localization and action recognition.
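As a hypothetical illustration, not drawn from the reviewed papers, a pretrained 3-D CNN backbone could be adapted to a binary cough/no-cough video classifier along the following lines; the torchvision model choice, clip shape, and two-class head are assumptions, and the fine-tuning loop is omitted.

```python
# Hypothetical sketch: adapting a Kinetics-pretrained 3-D CNN for binary
# cough/no-cough video classification; all choices here are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights="DEFAULT")                  # pretrained video backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: cough / no cough

clip = torch.randn(1, 3, 16, 112, 112)             # (batch, C, frames, H, W)
logits = model(clip)                               # fine-tuning loop omitted
print(logits.shape)                                # torch.Size([1, 2])
```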
Cough detection using drones has gained the attention of drone manufacturers [188]. Identifying subjects with coughing symptoms in crowds could help detect potential patients with respiratory illnesses [189] (Figure 19). High-resolution cameras, fast data transmission links, high-performance computers, and action recognition technologies could be combined effectively for such solutions.

10. Discussion

We have shown that camera imaging techniques using thermal cameras and video cameras, separately or in concert, have been considered for measuring physiological parameters and symptoms of illness. Most of the techniques have been developed in laboratory and clinical environments and could best be described as “non-contact” rather than remote. In general, measurements are under controlled lighting, indoors and at close range, with some notable exceptions.
The challenge is to achieve detection over ranges more associated with the field of remote sensing, from long range and from moving platforms. In remote sensing, achieving mapping or survey objectives is common, yet the research on non-contact sensing of illness has concentrated on small areas, static observation, and the individual rather than the ailment. Even the studies that measured cardiorespiratory parameters from drones were done at quite close range, limited mainly by the characteristics of the modest sensors used.
No concerted effort has been made to develop sensing for this most devastating of situations: a coronavirus, influenza, or any other type of pandemic. The literature reveals not attempts to undertake such missions, but technological indications of an underlying feasibility. The domain appears wide open to noncontact sensing research, but the technological hurdles are significant.
We have shown that, at short range, acoustics might serve to detect coughs and perhaps sneezes. At longer ranges, it should be possible to detect the actions associated with coughing and, by analogy, sneezing.
Thermal cameras have the potential to measure all likely symptoms, especially temperature, on a single camera, yet they are expensive and currently of comparatively low resolution. Research to date, and the approved usage of such devices, is based on short ranges with thermal calibration and a reliance on detecting anomalous individual temperature rather than absolute temperature. Range will continue to be limited by the resolution of thermal sensing technology and focusing elements. Thermal imaging techniques are susceptible to slight heat variations, to complicating factors such as head rotation and motion artefacts, and to environmental thermal noise caused by variations in background temperature.
Video cameras, given their availability, cost, adaptability, and compatibility, are an attractive option, but they cannot directly sense temperature and might at best infer it from elevated heart rate or breathing rate relative to statistical norms for a population. The resolution potential of visible light imaging means that video cameras will be useful for long-range detection of symptoms that manifest as gestures, postures, or gaits.
All imaging techniques that rely on intensity information, whether thermal or color based, require a clear ROI. They may not be feasible if a subject is covered by a blanket, face mask, or intubation equipment. Motion-based techniques can be used in such situations, although they may be more limited in range and measurement scenarios, particularly if the sensing platform is itself in motion.
At this stage, comprehensive studies of COVID-19 noncontact sensing have yet to be done, although there has been a substantial deployment of thermal imaging equipment for screening purposes. It is worth considering some appropriate configurations of COVID-19 sensing for different scenarios, based on the analysis above.
It is likely that, in exigent circumstances, a normal visible light webcam can provide some primary indications of COVID-19 symptoms, particularly with a subject at rest and particularly if a history of their resting vital signs has been recorded. Heart rate, SpO2, and breathing rate are all measurable and indicative of different aspects of the illness. The microphones on most web cameras also allow cough detection, another strong indicator of the illness. This solution has the advantage of cost, potentially less than USD 20 when used with an existing computing device. Mobile telephones combine cameras, microphones, and computing, making this option almost universally accessible.
A more comprehensive solution might consist of a LWIR camera with or without a blackbody calibration device. Frequently, such solutions include a visible light camera to classify objects and parts of the body, particularly the face. This combination of sensors might be used only to measure temperature, but advanced systems might include software to measure vital signs using a combination of thermal and visible image processing. At present, these systems are costly, requiring expensive thermal cameras, dedicated computing, and a dedicated platform.

11. Conclusions

Noncontact sensor modalities, especially thermal and video cameras, could provide a technique to screen large numbers of people for fever, cough, increased respiratory rate, and heart rate with no risk to operators. Such techniques might provide more sensitive screening methods for detecting COVID-19 than the current method of screening for fever alone. A more useful outcome would be new methods for control and response to contagion in human populations through surveillance and mapping.
Our review has found that the principles of noncontact detection have been demonstrated in indoor or highly controlled environments and mostly at quite short range. The challenge to the noncontact sensing community is to extend these techniques to longer ranges and make them robust enough to be effective on mobile and airborne platforms. The topic is of critical importance and represents a significant research opportunity given the unprecedented harm done to the world’s economy, society, and stability by the pandemic.

Author Contributions

F.-T.-Z.K., L.A.C., and J.S.C. did the literature review and wrote the draft manuscript. A.A.-N., A.G.P., and J.C. supervised, reviewed the work, and contributed with valuable discussions and advice. D.W., Y.H.L., T.T.O., S.T., T.X.B.N., T.D.M., S.P.P., Y.T., J.L.M., and J.H. contributed to technology analysis and demonstration. J.C. initiated and guided the study. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The University of South Australia was awarded a contract by Vital Intelligence Inc. Canada to research remote detection of pandemic symptoms.

References

  1. Wu, Y.-C.; Chen, C.-S.; Chan, Y.-J. The outbreak of COVID-19: An overview. J. Chin. Med. Assoc. 2020, 83, 217–220. [Google Scholar] [CrossRef]
  2. Sohrabi, C.; Alsafi, Z.; O’Neill, N.; Khan, M.; Kerwan, A.; Al-Jabir, A.; Iosifidis, C.; Agha, R. World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). Int. J. Surg. 2020, 76, 71–76. [Google Scholar] [CrossRef]
  3. 2019-nCoV Situation Report 51. World Health Organization. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/ (accessed on 3 April 2020).
  4. 2019-nCoV Situation Report 71. World Health Organization. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/ (accessed on 3 April 2020).
  5. Zhou, P.; Yang, X.-L.; Wang, X.-G.; Hu, B.; Zhang, L.; Zhang, W.; Si, H.-R.; Zhu, Y.; Li, B.; Huang, C.-L. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 2020, 579, 270–273. [Google Scholar] [CrossRef] [Green Version]
  6. Gorbalenya, A.E. The species Severe acute respiratory syndrome-related coronavirus: Classifying 2019-nCoV and naming it SARS-CoV-2. Nat. Microbiol. 2020, 5, 536–544. [Google Scholar]
  7. Adhikari, S.P.; Meng, S.; Wu, Y.-J.; Mao, Y.-P.; Ye, R.-X.; Wang, Q.-Z.; Sun, C.; Sylvia, S.; Rozelle, S.; Raat, H. Epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (COVID-19) during the early outbreak period: A scoping review. Infect. Dis. Poverty 2020, 9, 1–12. [Google Scholar] [CrossRef] [Green Version]
  8. Liu, C.-C.; Chang, R.-E.; Chang, W.-C. Limitations of forehead infrared body temperature detection for fever screening for severe acute respiratory syndrome. Infect. Control Hosp. Epidemiol. 2004, 25, 1109–1111. [Google Scholar] [CrossRef] [PubMed]
  9. Qiu, H.; Wu, J.; Hong, L.; Luo, Y.; Song, Q.; Chen, D. Clinical and epidemiological features of 36 children with coronavirus disease 2019 (COVID-19) in Zhejiang, China: An observational cohort study. Lancet Infect. Dis. 2020, 20, 689–696. [Google Scholar] [CrossRef] [Green Version]
  10. Kelvin, A.A.; Halperin, S. COVID-19 in children: The link in the transmission chain. Lancet Infect. Dis. 2020, 20, 633–634. [Google Scholar] [CrossRef] [Green Version]
  11. Zhou, F.; Yu, T.; Du, R.; Fan, G.; Liu, Y.; Liu, Z.; Xiang, J.; Wang, Y.; Song, B.; Gu, X. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: A retrospective cohort study. Lancet 2020, 395, 1054–1062. [Google Scholar] [CrossRef]
  12. Mehra, M.R.; Ruschitzka, F. COVID-19 Illness and Heart Failure: A Missing Link? JACC Heart Fail. 2020, 8, 512–514. [Google Scholar] [CrossRef]
  13. Sun, Q.; Qiu, H.; Huang, M.; Yang, Y. Lower mortality of COVID-19 by early recognition and intervention: Experience from Jiangsu Province. Ann. Intensive Care 2020, 10, 1–4. [Google Scholar] [CrossRef] [Green Version]
  14. Turner, A.J.; Hiscox, J.A.; Hooper, N.M. ACE2: From vasopeptidase to SARS virus receptor. Trends Pharmacol. Sci. 2004, 25, 291–294. [Google Scholar] [CrossRef] [PubMed]
  15. Keidar, S.; Kaplan, M.; Gamliel-Lazarovich, A. ACE2 of the heart: From angiotensin I to angiotensin (1–7). Cardiovasc Res. 2007, 73, 463–469. [Google Scholar] [CrossRef]
  16. Sun, P.; Lu, X.; Xu, C.; Sun, W.; Pan, B. Understanding of COVID-19 based on current evidence. J. Med. Virol. 2020, 92, 548–551. [Google Scholar] [CrossRef] [PubMed]
  17. Huang, C.; Wang, Y.; Li, X.; Ren, L.; Zhao, J.; Hu, Y.; Zhang, L.; Fan, G.; Xu, J.; Gu, X. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020, 395, 497–506. [Google Scholar] [CrossRef] [Green Version]
  18. Zheng, Y.-Y.; Ma, Y.-T.; Zhang, J.-Y.; Xie, X. COVID-19 and the cardiovascular system. Nat. Rev. Cardiol. 2020, 17, 259–260. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. COVID-19 Testing in Australia—Information for Health Professionals. Therapeutic Goods Administration, Department of Health, Australian Government. Available online: https://www.tga.gov.au/node/904153 (accessed on 2 April 2020).
  20. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020, 296, E32–E40. [Google Scholar] [CrossRef] [Green Version]
  21. Cascella, M.; Rajnik, M.; Cuomo, A.; Dulebohn, S.C.; Di Napoli, R. Features, evaluation and treatment coronavirus (COVID-19). In StatPearls [Internet]; National Center for Biotechnology Information, U.S. National Library of Medicine, StatPearls Publishing: Treasure Island, FL, USA, 2020. [Google Scholar]
  22. Body Temperature Norms. MedlinePlus (Internet). Bethesda (MD): National Library of Medicine (US). Available online: https://medlineplus.gov/ency/article/001982.htm (accessed on 31 March 2020).
  23. Updated WHO Advice for International Traffic in Relation to the Outbreak of the Novel Coronavirus 2019-nCoV. World Health Organization. Available online: https://www.who.int/ith/2020-24-01-outbreak-of-Pneumonia-caused-by-new-coronavirus/en/ (accessed on 2 April 2020).
  24. Bwire, G.M.; Paulo, L.S. Coronavirus disease-2019: Is fever an adequate screening for the returning travelers? Trop. Med. Health 2020, 48, 1–3. [Google Scholar] [CrossRef]
  25. Du, Z.; Xu, X.; Wu, Y.; Wang, L.; Cowling, B.J.; Meyers, L.A. The serial interval of COVID-19 from publicly reported confirmed cases. medRxiv 2020, 2020.02.19.20025452. [Google Scholar]
  26. Bai, Y.; Yao, L.; Wei, T.; Tian, F.; Jin, D.-Y.; Chen, L.; Wang, M. Presumed asymptomatic carrier transmission of COVID-19. JAMA 2020, 323, 1406–1407. [Google Scholar] [CrossRef] [Green Version]
  27. Li, K.; Wang, S.; Du, C.; Huang, Y.; Feng, X.; Zhou, F. Accurate fatigue detection based on multiple facial morphological features. J. Sens. 2019, 2019, 1–10. [Google Scholar] [CrossRef] [Green Version]
  28. Wu, P.; Duan, F.; Luo, C.; Liu, Q.; Qu, X.; Liang, L.; Wu, K. Characteristics of Ocular Findings of Patients With Coronavirus Disease 2019 (COVID-19) in Hubei Province, China. JAMA Ophthalmol. 2020, 138, 575–578. [Google Scholar] [CrossRef]
  29. Cretikos, M.A.; Bellomo, R.; Hillman, K.; Chen, J.; Finfer, S.; Flabouris, A. Respiratory rate: The neglected vital sign. Med. J. Aust. 2008, 188, 657–659. [Google Scholar] [CrossRef] [PubMed]
  30. Wu, Z.; McGoogan, J.M. Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in China: Summary of a report of 72 314 cases from the Chinese Center for Disease Control and Prevention. JAMA 2020, 323, 1239–1242. [Google Scholar] [CrossRef]
  31. Karjalainen, J.; Viitasalo, M. Fever and cardiac rhythm. Arch. Intern. Med. 1986, 146, 1169–1171. [Google Scholar] [CrossRef] [PubMed]
  32. Khanam, F.-T.-Z.; Al-Naji, A.; Chahl, J. Remote Monitoring of Vital Signs in Diverse Non-Clinical and Clinical Scenarios Using Computer Vision Systems: A Review. Appl. Sci. 2019, 9, 4474. [Google Scholar] [CrossRef] [Green Version]
  33. Chen, X.; Cheng, J.; Song, R.; Liu, Y.; Ward, R.; Wang, Z.J. Video-Based Heart Rate Measurement: Recent Advances and Future Prospects. IEEE Trans. Instrum. Meas. 2018, 68, 3600–3615. [Google Scholar] [CrossRef]
  34. Al-Naji, A.; Gibson, K.; Lee, S.-H.; Chahl, J. Monitoring of cardiorespiratory signal: Principles of remote measurements and review of methods. IEEE Access 2017, 5, 15776–15790. [Google Scholar] [CrossRef]
  35. Jasti, N.; Bista, S.; Bhargav, H.; Sinha, S.; Gupta, S.; Chaturvedi, S.; Gangadhar, B. Medical Applications of Infrared Thermography: A Narrative Review. J. Stem Cells 2019, 14, 35–53. [Google Scholar]
  36. Gade, R.; Moeslund, T.B. Thermal cameras and applications: A survey. Mach. Vis. Appl. 2014, 25, 245–262. [Google Scholar] [CrossRef] [Green Version]
  37. Lloyd, C. How Does Thermal Imaging Work? Available online: https://www.howtogeek.com/294076/how-does-thermal-imaging-work/ (accessed on 10 December 2020).
  38. Bouzida, N.; Bendada, A.; Maldague, X.P. Visualization of body thermoregulation by infrared imaging. J. Therm. Biol. 2009, 34, 120–126. [Google Scholar] [CrossRef]
  39. Thermal Imaging for Detecting Elevated Body Temperature. FLIR Systems. Available online: https://www.flir.com.au/discover/public-safety/thermal-imaging-for-detecting-elevated-body-temperature (accessed on 22 May 2020).
  40. Ng, E.Y.; Kawb, G.; Chang, W. Analysis of IR thermal imager for mass blind fever screening. Microvasc. Res. 2004, 68, 104–109. [Google Scholar] [CrossRef]
  41. Bilodeau, G.-A.; Torabi, A.; Lévesque, M.; Ouellet, C.; Langlois, J.P.; Lema, P.; Carmant, L. Body temperature estimation of a moving subject from thermographic images. Mach. Vis. Appl. 2012, 23, 299–311. [Google Scholar] [CrossRef]
  42. Aubakir, B.; Nurimbetov, B.; Tursynbek, I.; Varol, H.A. Vital Sign Monitoring Utilizing Eulerian Video Magnification and Thermography. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3527–3530. [Google Scholar]
  43. Sharma, A.; Yadav, A.R. Image Processing Based Body Temperature Estimation Using Thermal Video Sequence. In Proceedings of the 2017 International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 18–19 July 2017; pp. 846–852. [Google Scholar]
  44. Lin, J.-W.; Lu, M.-H.; Lin, Y.-H. A Thermal Camera Based Continuous Body Temperature Measurement System. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, South Korea, 27–28 October 2019; pp. 1681–1687. [Google Scholar]
  45. Sumriddetchkajorn, S.; Chaitavon, K. Field test studies of our infrared-based human temperature screening system embedded with a parallel measurement approach. Infrared Phys. Technol. 2009, 52, 119–123. [Google Scholar] [CrossRef]
  46. Silawan, N.; Kusukame, K.; Kek, K.J.; Kuan, W.S. A Novel Environment-Invariant Core Body Temperature Estimation for High Sensitivity and Specificity Fever Screening. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1612–1615. [Google Scholar]
  47. Thomas, J.B.; Pahler, L.; Handy, R.; Thiese, M.S.; Schaefer, C. Pilot study predicting core body temperatures in hot work environments using thermal imagery. J. Chem. Health Saf. 2019, 26, 75–83. [Google Scholar] [CrossRef]
  48. Yang, M.; Liu, Q.; Turner, T.; Wu, Y. Vital Sign Estimation from Passive Thermal Video. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  49. Murthy, R.; Pavlidis, I. Noncontact measurement of breathing function. IEEE Eng. Med. Biol. Mag. 2006, 25, 57–67. [Google Scholar] [CrossRef]
  50. Murthy, R.; Pavlidis, I.; Tsiamyrtzis, P. Touchless Monitoring of Breathing Function. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; pp. 1196–1199. [Google Scholar]
  51. Fei, J.; Pavlidis, I. Analysis of Breathing Air Flow Patterns in Thermal Imaging. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 946–952. [Google Scholar]
  52. Fei, J.; Zhu, Z.; Pavlidis, I. Imaging Breathing Rate in the CO2 Absorption Band. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 700–705. [Google Scholar]
  53. Sun, N.; Garbey, M.; Merla, A.; Pavlidis, I. Imaging the Cardiovascular Pulse. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 416–421. [Google Scholar]
  54. Garbey, M.; Sun, N.; Merla, A.; Pavlidis, I. Contact-free measurement of cardiac pulse based on the analysis of thermal imagery. IEEE Trans. Biomed. Eng. 2007, 54, 1418–1426. [Google Scholar] [CrossRef]
  55. Chekmenev, S.Y.; Rara, H.; Farag, A.A. Non-Contact, Wavelet-Based Measurement of Vital Signs Using Thermal Imaging. In Proceedings of the First International Conference on Graphics, Vision, and Image Processing (GVIP), Cairo, Egypt, 17–22 June 2007; pp. 107–112. [Google Scholar]
  56. Fei, J.; Pavlidis, I. Thermistor at a distance: Unobtrusive measurement of breathing. IEEE Trans. Biomed. Eng. 2009, 57, 988–998. [Google Scholar]
  57. Shakhih, M.F.M.; Wahab, A.A.; Salim, M.I.M. Assessment of inspiration and expiration time using infrared thermal imaging modality. Infrared Phys. Technol. 2019, 99, 129–139. [Google Scholar] [CrossRef]
  58. Pereira, C.B.; Yu, X.; Czaplik, M.; Rossaint, R.; Blazek, V.; Leonhardt, S. Remote monitoring of breathing dynamics using infrared thermography. Biomed. Opt. Express 2015, 6, 4378–4394. [Google Scholar] [CrossRef] [PubMed]
  59. Barbosa Pereira, C.; Czaplik, M.; Blazek, V.; Leonhardt, S.; Teichmann, D. Monitoring of cardiorespiratory signals using thermal imaging: A pilot study on healthy human subjects. Sensors 2018, 18, 1541. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Pereira, C.B.; Yu, X.; Blazek, V.; Venema, B.; Leonhardt, S. Multisensor Data Fusion for Enhanced Respiratory Rate Estimation in Thermal Videos. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1381–1384. [Google Scholar]
  61. Abbas, A.K.; Heimann, K.; Jergus, K.; Orlikowsky, T.; Leonhardt, S. Neonatal non-contact respiratory monitoring based on real-time infrared thermography. Biomed. Eng. Online 2011, 10, 93. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Pereira, C.B.; Heimann, K.; Venema, B.; Blazek, V.; Czaplik, M.; Leonhardt, S. Estimation of Respiratory Rate from Thermal Videos of Preterm Infants. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 3818–3821. [Google Scholar]
  63. Pereira, C.B.; Yu, X.; Goos, T.; Reiss, I.; Orlikowsky, T.; Heimann, K.; Venema, B.; Blazek, V.; Leonhardt, S.; Teichmann, D. Noncontact monitoring of respiratory rate in newborn infants using thermal imaging. IEEE Trans. Biomed. Eng. 2018, 66, 1105–1114. [Google Scholar] [CrossRef] [PubMed]
  64. Berg, A. Detection and Tracking in Thermal Infrared Imagery; Linköping University Electronic Press: Linköping, Sweden, 2016; p. 66. [Google Scholar]
  65. Haghmohammadi, H.F.; Necsulescu, D.; Vahidi, M. Remote Measurement of Body Temperature for an Indoor Moving Crowd. In Proceedings of the 2018 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 24–26 May 2018; pp. 1–6. [Google Scholar]
  66. Alkali, A.H. Evaluation of thresholding based noncontact respiration rate monitoring using thermal imaging. Evaluation 2017, 6, 50–58. [Google Scholar]
  67. Kopaczka, M.; Nestler, J.; Merhof, D. Face detection in thermal infrared images: A comparison of algorithm-and machine-learning-based approaches. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Antwerp, Belgium, 18–21 September 2017; pp. 518–529. [Google Scholar]
  68. Palmero, C.; Clapés, A.; Bahnsen, C.; Møgelmose, A.; Moeslund, T.B.; Escalera, S. Multi-modal rgb–depth–thermal human body segmentation. Int. J. Comput. Vis. 2016, 118, 217–239. [Google Scholar] [CrossRef] [Green Version]
  69. Kezebou, L.; Oludare, V.; Panetta, K.; Agaian, S. TR-GAN: Thermal to RGB Face Synthesis with Generative Adversarial Network for Cross-Modal Face Recognition. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications 2020; p. 113990. Available online: https://spie.org/Publications/Proceedings/Volume/11399 (accessed on 5 February 2021).
  70. Chen, L.; Hu, M.; Liu, N.; Zhai, G.; Yang, S.X. Collaborative use of RGB and thermal imaging for remote breathing rate measurement under realistic conditions. Infrared Phys. Technol. 2020, 111, 103504. [Google Scholar] [CrossRef]
  71. Iranmanesh, S.M.; Dabouei, A.; Kazemi, H.; Nasrabadi, N.M. Deep cross Polarimetric Thermal-To-Visible Face Recognition. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 166–173. [Google Scholar]
  72. Zhang, T.; Wiliem, A.; Yang, S.; Lovell, B. Tv-gan: GENERATIVE Adversarial Network Based Thermal to Visible Face Recognition. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 174–181. [Google Scholar]
  73. Hu, S.; Short, N.J.; Riggan, B.S.; Gordon, C.; Gurton, K.P.; Thielke, M.; Gurram, P.; Chan, A.L. A Polarimetric Thermal Database for Face Recognition Research. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 119–126. [Google Scholar]
  74. Setjo, C.H.; Achmad, B. Thermal Image Human Detection Using Haar-Cascade Classifier. In Proceedings of the 2017 7th International Annual Engineering Seminar (InAES), Yogyakarta, Indonesia, 1–2 August 2017; pp. 1–6. [Google Scholar]
  75. Ivašić-Kos, M.; Krišto, M.; Pobar, M. Human Detection in Thermal Imaging Using YOLO. In Proceedings of the 2019 5th International Conference on Computer and Technology Applications, Istanbul, Turkey, 16–17 April 2019; pp. 20–24. [Google Scholar]
  76. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  77. Akula, A.; Ghosh, R.; Kumar, S.; Sardana, H. WignerMSER: Pseudo-Wigner Distribution Enriched MSER Feature Detector for Object Recognition in Thermal Infrared Images. IEEE Sens. J. 2019, 19, 4221–4228. [Google Scholar] [CrossRef]
  78. Miethig, B.; Liu, A.; Habibi, S.; Mohrenschildt, M.v. Leveraging Thermal Imaging for Autonomous Driving. In Proceedings of the 2019 IEEE Transportation Electrification Conference and Expo (ITEC), Detroit, MI, USA, 19–21 June 2019; pp. 1–5. [Google Scholar]
  79. Agrawal, K.; Subramanian, A. Enhancing Object Detection in Adverse Conditions using Thermal Imaging. arXiv 2019, arXiv:1909.13551. [Google Scholar]
  80. Malpani, S.; Asha, C.; Narasimhadhan, A. Thermal Vision Human Classification and Localization Using Bag of Visual Word. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 3135–3139. [Google Scholar]
  81. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single Shot Multibox Detector. In European Conference on Computer Vision, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  82. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  83. Piniarski, K.; Pawłowski, P. Segmentation of Pedestrians in Thermal Imaging. In Proceedings of the 2018 Baltic URSI Symposium (URSI), Poznan, Poland, 14–17 May 2018; pp. 210–211. [Google Scholar]
  84. Sun, Y.; Zuo, W.; Liu, M. Rtfnet: Rgb-thermal fusion network for semantic segmentation of urban scenes. IEEE Robot. Autom. Lett. 2019, 4, 2576–2583. [Google Scholar] [CrossRef]
  85. Shivakumar, S.S.; Rodrigues, N.; Zhou, A.; Miller, I.D.; Kumar, V.; Taylor, C.J. Pst900: Rgb-Thermal Calibration, Dataset and Segmentation Network. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9441–9447. [Google Scholar]
  86. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  87. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE international conference on computer vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  88. Wang, T.; Zhang, T.; Lovell, B.C. EBIT: Weakly-supervised image translation with edge and boundary enhancement. Pattern Recognit. Lett. 2020, 138, 534–539. [Google Scholar] [CrossRef]
  89. Panetta, K.; Wan, Q.; Agaian, S.; Rajeev, S.; Kamath, S.; Rajendran, R.; Rao, S.; Kaszowska, A.; Taylor, H.; Samani, A. A comprehensive database for benchmarking imaging systems. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 509–520. [Google Scholar] [CrossRef]
  90. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial nets. arXiv 2014, arXiv:1406.2661. [Google Scholar]
  91. Kniaz, V.V.; Knyaz, V.A.; Hladuvka, J.; Kropatsch, W.G.; Mizginov, V. Thermalgan: Multimodal Color-to-Thermal Image Translation for Person re-Identification in Multispectral Dataset. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 1–19. [Google Scholar]
  92. Zhang, H.; Patel, V.M.; Riggan, B.S.; Hu, S. Generative Adversarial Network-Based Synthesis of Visible Faces from Polarimetrie Thermal Faces. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 100–107. [Google Scholar]
  93. Thermal Imaging Systems (Infrared Thermographic Systems/Thermal Imaging Cameras). Available online: https://www.fda.gov/medical-devices/general-hospital-devices-and-supplies/thermal-imaging-systems-infrared-thermographic-systems-thermal-imaging-cameras (accessed on 16 December 2020).
  94. Enforcement Policy for Telethermographic Systems During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency. Available online: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enforcement-policy-telethermographic-systems-during-coronavirus-disease-2019-covid-19-public-health (accessed on 16 December 2020).
  95. Ring, F.J.; Ng, E.; Diakides, M.; Bronzino, J.; Peterson, D. Infrared thermal imaging standards for human fever detection. In Medical Infrared Imaging: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2012; pp. 21–22. [Google Scholar]
  96. Ghassemi, P.; Pfefer, T.J.; Casamento, J.P.; Simpson, R.; Wang, Q. Best practices for standardized performance testing of infrared thermographs intended for fever screening. PLoS ONE 2018, 13, e0203302. [Google Scholar] [CrossRef] [Green Version]
  97. Howell, K.J.; Mercer, J.B.; Smith, R.E. Infrared thermography for mass fever screening: Repeating the mistakes of the past? J. Med. Virol. 2020, 30, 5–6. [Google Scholar]
  98. Temperature Screening White Paper. Available online: https://www.een.com/thermal-cameras-for-elevated-body-temperature-whitepaper/ (accessed on 16 December 2020).
  99. Kumar, A.; Sharma, K.; Singh, H.; Naugriya, S.G.; Gill, S.S.; Buyya, R. A drone-based networked system and methods for combating coronavirus disease (COVID-19) pandemic. Future Gener. Comput. Syst. 2020, 115, 1–19. [Google Scholar] [CrossRef]
  100. Dastin, J.; Hu, K. Exclusive: Amazon Deploys Thermal Cameras at Warehouses to Scan for Fevers Faster. Available online: https://www.reuters.com/article/us-health-coronavirus-amazon-com-cameras/exclusive-amazon-deploys-thermal-cameras-at-warehouses-to-scan-for-fevers-faster-idUSKBN2200HT (accessed on 16 December 2020).
  101. Martín, H. Airports Are Testing Thermal Cameras and Other Technology to Screen Travelers for COVID-19. Available online: https://www.latimes.com/business/story/2020-05-13/airports-test-technology-screen-covid-19 (accessed on 16 December 2020).
  102. Incheon International Airport Chooses FLIR Thermal Imaging Cameras in Response to the COVID-19 Pandemic. Available online: https://www.flir.com/discover/incheon-international-airport-chooses-flir-thermal-imaging-cameras-in-response-to-the-covid-19-pandemic/ (accessed on 16 December 2020).
  103. Evans, S. Coronavirus: Canberra Airport Temperature Screening all Passengers and Staff. Available online: https://www.canberratimes.com.au/story/6744912/canberra-airport-ramps-up-covid-19-safety-measures/ (accessed on 16 December 2020).
  104. Chandran, R. Asian Cities Virus-Proof Public Transit with Smart Shelters, Thermal Scanners. Available online: https://www.weforum.org/agenda/2020/08/asian-cities-virus-proof-public-transit-with-smart-shelters-thermal-scanners/ (accessed on 16 December 2020).
  105. Temperature Screening for COVID-19 with Thermal Cameras or Scanners Is Increasing More and More in Hospitals, Airports, Public Transport. Available online: https://www.soraxteknoloji.com/en/2020/08/16/temperature-screening-for-covid-19-with-thermal-cameras-or-scanners-is-increasing-more-and-more-in-hospitals-airports-public-transport/ (accessed on 16 December 2020).
  106. ASU to Fight COVID-19 Spread with Thermal Imaging Tech. Available online: https://www.govtech.com/products/ASU-to-Fight-COVID-19-Spread-with-Thermal-Imaging-Tech.html (accessed on 16 December 2020).
  107. Gostic, K.M.; Gomez, A.C.; Mummah, R.O.; Kucharski, A.J.; Lloyd-Smith, J.O. Estimated effectiveness of traveller screening to prevent international spread of 2019 novel coronavirus (2019-nCoV). medRxiv 2020, 1–19. Available online: https://www.medrxiv.org/content/10.1101/2020.01.28.20019224v2 (accessed on 5 February 2021).
  108. Bitar, D.; Goubar, A.; Desenclos, J.-C. International travels and fever screening during epidemics: A literature review on the effectiveness and potential use of non-contact infrared thermometers. Eurosurveillance 2009, 14, 19115. [Google Scholar]
  109. Priest, P.C.; Duncan, A.R.; Jennings, L.C.; Baker, M.G. Thermal image scanning for influenza border screening: Results of an airport screening study. PLoS ONE 2011, 6, e14490. [Google Scholar] [CrossRef] [Green Version]
  110. Tay, M.; Low, Y.; Zhao, X.; Cook, A.; Lee, V. Comparison of Infrared Thermal Detection Systems for mass fever screening in a tropical healthcare setting. Public Health 2015, 129, 1471–1478. [Google Scholar] [CrossRef] [PubMed]
  111. Perera, A.G.; Khanam, F.-T.-Z.; Al-Naji, A.; Chahl, J. Detection and Localisation of Life Signs from the Air Using Image Registration and Spatio-Temporal Filtering. Remote Sens. 2020, 12, 577. [Google Scholar] [CrossRef] [Green Version]
  112. Al-Naji, A.; Chahl, J. Remote respiratory monitoring system based on developing motion magnification technique. Biomed. Signal. Process. Control 2016, 29, 1–10. [Google Scholar] [CrossRef]
  113. Buzzelli, M.; Albé, A.; Ciocca, G. A vision-based system for monitoring elderly people at home. Appl. Sci. 2020, 10, 374. [Google Scholar] [CrossRef] [Green Version]
  114. Thi, T.H.; Wang, L.; Ye, N.; Zhang, J.; Maurer-Stroh, S.; Cheng, L. Recognizing flu-like symptoms from videos. Bmc Bioinform. 2014, 15, 300. [Google Scholar] [CrossRef] [Green Version]
  115. Kvapilova, L.; Boza, V.; Dubec, P.; Majernik, M.; Bogar, J.; Jamison, J.; Goldsack, J.C.; Kimmel, D.J.; Karlin, D.R. Continuous Sound Collection Using Smartphones and Machine Learning to Measure Cough. Digit. Biomark. 2019, 3, 166–175. [Google Scholar] [CrossRef]
  116. Poh, M.-Z.; McDuff, D.J.; Picard, R.W. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt. Express 2010, 18, 10762–10774. [Google Scholar] [CrossRef]
  117. Viola, P.; Jones, M. Rapid Object Detection Using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1, pp. 511–518. [Google Scholar]
  118. Pursche, T.; Krajewski, J.; Moeller, R. Video-Based Heart Rate Measurement from Human Faces. In Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 12–15 January 2012; pp. 544–545. [Google Scholar]
  119. Feng, L.; Po, L.-M.; Xu, X.; Li, Y. Motion Artifacts Suppression for Remote Imaging Photoplethysmography. In Proceedings of the 2014 19th International Conference on Digital Signal Processing, Hong Kong, China, 20–23 August 2014; pp. 18–23. [Google Scholar]
  120. Lewandowska, M.; Rumiński, J.; Kocejko, T.; Nowak, J. Measuring Pulse Rate with a Webcam—A Non-Contact Method for Evaluating Cardiac Activity. In Proceedings of the 2011 federated conference on computer science and information systems (FedCSIS), Szczecin, Poland, 18–21 September 2011; pp. 405–410. [Google Scholar]
  121. Bousefsaf, F.; Maaoui, C.; Pruski, A. Continuous wavelet filtering on webcam photoplethysmographic signals to remotely assess the instantaneous heart rate. Biomed. Signal. Process. Control 2013, 8, 568–574. [Google Scholar] [CrossRef]
  122. Wu, B.-F.; Huang, P.-W.; Tsou, T.-Y.; Lin, T.-M.; Chung, M.-L. Camera-Based Heart Rate Measurement Using Continuous Wavelet transform. In Proceedings of the 2017 International Conference on System Science and Engineering (ICSSE), Ho Chi Minh City, Vietnam, 21–23 July 2017; pp. 7–11. [Google Scholar]
  123. Wu, B.-F.; Huang, P.-W.; Lin, C.-H.; Chung, M.-L.; Tsou, T.-Y.; Wu, Y.-L. Motion resistant image-photoplethysmography based on spectral peak tracking algorithm. IEEE Access 2018, 6, 21621–21634. [Google Scholar] [CrossRef]
  124. Feng, L.; Po, L.-M.; Xu, X.; Li, Y.; Ma, R. Motion-resistant remote imaging photoplethysmography based on the optical properties of skin. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 879–891. [Google Scholar] [CrossRef]
  125. Cheng, J.; Chen, X.; Xu, L.; Wang, Z.J. Illumination variation-resistant video-based heart rate measurement using joint blind source separation and ensemble empirical mode decomposition. IEEE J. Biomed. Health Inform. 2017, 21, 1422–1433.
  126. Xu, L.; Cheng, J.; Chen, X. Illumination variation interference suppression in remote PPG using PLS and MEMD. Electron. Lett. 2017, 53, 216–218.
  127. Chen, D.-Y.; Wang, J.-J.; Lin, K.-Y.; Chang, H.-H.; Wu, H.-K.; Chen, Y.-S.; Lee, S.-Y. Image sensor-based heart rate evaluation from face reflectance using Hilbert–Huang transform. IEEE Sens. J. 2015, 15, 618–627.
  128. Lin, K.-Y.; Chen, D.-Y.; Tsai, W.-J. Face-based heart rate signal decomposition and evaluation using multiple linear regression. IEEE Sens. J. 2016, 16, 1351–1360.
  129. Lee, D.; Kim, J.; Kwon, S.; Park, K. Heart Rate Estimation from Facial Photoplethysmography during Dynamic Illuminance Changes. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2758–2761.
  130. Tarassenko, L.; Villarroel, M.; Guazzi, A.; Jorge, J.; Clifton, D.; Pugh, C. Non-contact video-based vital sign monitoring using ambient light and auto-regressive models. Physiol. Meas. 2014, 35, 807.
  131. Al-Naji, A.; Chahl, J.; Lee, S.-H. Cardiopulmonary signal acquisition from different regions using video imaging analysis. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2018, 1–15.
  132. Aarts, L.A.; Jeanne, V.; Cleary, J.P.; Lieber, C.; Nelson, J.S.; Oetomo, S.B.; Verkruysse, W. Non-contact heart rate monitoring utilizing camera photoplethysmography in the neonatal intensive care unit—A pilot study. Early Hum. Dev. 2013, 89, 943–948.
  133. Cobos-Torres, J.-C.; Abderrahim, M.; Martínez-Orgado, J. Non-Contact, Simple Neonatal Monitoring by Photoplethysmography. Sensors 2018, 18, 4362.
  134. Gibson, K.; Al-Naji, A.; Fleet, J.; Steen, M.; Esterman, A.; Chahl, J.; Huynh, J.; Morris, S. Non-contact heart and respiratory rate monitoring of preterm infants based on a computer vision system: A method comparison study. Pediatr. Res. 2019, 86, 738–741.
  135. De Haan, G.; Jeanne, V. Robust pulse rate from chrominance-based rPPG. IEEE Trans. Biomed. Eng. 2013, 60, 2878–2886.
  136. De Haan, G.; Van Leest, A. Improved motion robustness of remote-PPG by using the blood volume pulse signature. Physiol. Meas. 2014, 35, 1913.
  137. Wang, W.; Stuijk, S.; De Haan, G. A novel algorithm for remote photoplethysmography: Spatial subspace rotation. IEEE Trans. Biomed. Eng. 2016, 63, 1974–1984.
  138. Wang, W.; den Brinker, A.C.; Stuijk, S.; de Haan, G. Algorithmic principles of remote PPG. IEEE Trans. Biomed. Eng. 2017, 64, 1479–1491.
  139. Wang, W.; den Brinker, A.C.; Stuijk, S.; de Haan, G. Robust heart rate from fitness videos. Physiol. Meas. 2017, 38, 1023.
  140. Yu, S.; Hu, S.; Azorin-Peris, V.; Chambers, J.A.; Zhu, Y.; Greenwald, S.E. Motion-compensated noncontact imaging photoplethysmography to monitor cardiorespiratory status during exercise. J. Biomed. Opt. 2011, 16, 077010.
  141. Kwon, S.; Kim, H.; Park, K.S. Validation of Heart Rate Extraction Using Video Imaging on a Built-In Camera System of a Smartphone. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–2 September 2012; pp. 2174–2177.
  142. Bernacchia, N.; Scalise, L.; Casacanditella, L.; Ercoli, I.; Marchionni, P.; Tomasini, E.P. Non Contact Measurement of Heart and Respiration Rates Based on Kinect™. In Proceedings of the 2014 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Lisbon, Portugal, 11–12 June 2014; pp. 1–5.
  143. Smilkstein, T.; Buenrostro, M.; Kenyon, A.; Lienemann, M.; Larson, G. Heart Rate Monitoring Using Kinect and Color Amplification. In Proceedings of the 2014 IEEE Healthcare Innovation Conference (HIC), Seattle, WA, USA, 8–10 October 2014; pp. 60–62.
  144. Gambi, E.; Agostinelli, A.; Belli, A.; Burattini, L.; Cippitelli, E.; Fioretti, S.; Pierleoni, P.; Ricciuti, M.; Sbrollini, A.; Spinsante, S. Heart rate detection using Microsoft Kinect: Validation and comparison to wearable devices. Sensors 2017, 17, 1776.
  145. Al-Naji, A.; Perera, A.G.; Chahl, J. Remote Measurement of Cardiopulmonary Signal Using an Unmanned Aerial Vehicle. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Zawiercie, Poland, 26–29 September 2018; p. 012001.
  146. Al-Naji, A.; Chahl, J. Remote Optical Cardiopulmonary Signal Extraction With Noise Artifact Removal, Multiple Subject Detection & Long-Distance. IEEE Access 2018, 6, 11573–11595.
  147. Gupta, O.; McDuff, D.; Raskar, R. Real-Time Physiological Measurement and Visualization Using a Synchronized Multi-Camera System. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 46–53.
  148. Hu, M.-H.; Zhai, G.-T.; Li, D.; Fan, Y.-Z.; Chen, X.-H.; Yang, X.-K. Synergetic use of thermal and visible imaging techniques for contactless and unobtrusive breathing measurement. J. Biomed. Opt. 2017, 22, 036006.
  149. Hu, M.; Zhai, G.; Li, D.; Fan, Y.; Duan, H.; Zhu, W.; Yang, X. Combination of near-infrared and thermal imaging techniques for the remote and simultaneous measurements of breathing and heart rates under sleep situation. PLoS ONE 2018, 13, e0190466.
  150. Bennett, S.; El Harake, T.N.; Goubran, R.; Knoefel, F. Adaptive Eulerian Video Processing of Thermal Video: An Experimental Analysis. IEEE Trans. Instrum. Meas. 2017, 66, 2516–2524.
  151. Wieringa, F.P.; Mastik, F.; van der Steen, A.F. Contactless multiple wavelength photoplethysmographic imaging: A first step toward “SpO2 camera” technology. Ann. Biomed. Eng. 2005, 33, 1034–1041.
  152. Humphreys, K.; Ward, T.; Markham, C. Noncontact simultaneous dual wavelength photoplethysmography: A further step toward noncontact pulse oximetry. Rev. Sci. Instrum. 2007, 78, 044304.
  153. Humphreys, K.; Ward, T.; Markham, C. A CMOS Camera-Based Pulse Oximetry Imaging System. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 1–4 September 2005; pp. 3494–3497.
  154. Kong, L.; Zhao, Y.; Dong, L.; Jian, Y.; Jin, X.; Li, B.; Feng, Y.; Liu, M.; Liu, X.; Wu, H. Non-contact detection of oxygen saturation based on visible light imaging device using ambient light. Opt. Express 2013, 21, 17464–17471.
  155. Shao, D.; Liu, C.; Tsow, F.; Yang, Y.; Du, Z.; Iriya, R.; Yu, H.; Tao, N. Noncontact monitoring of blood oxygen saturation using camera and dual-wavelength imaging system. IEEE Trans. Biomed. Eng. 2015, 63, 1091–1098.
  156. Verkruysse, W.; Bartula, M.; Bresch, E.; Rocque, M.; Meftah, M.; Kirenko, I. Calibration of contactless pulse oximetry. Anesth. Analg. 2017, 124, 136.
  157. Moço, A.; Verkruysse, W. Pulse oximetry based on photoplethysmography imaging with red and green light. J. Clin. Monit. Comput. 2020, 1–11.
  158. Van Gastel, M.; Stuijk, S.; De Haan, G. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring. Sci. Rep. 2016, 6, 1–16.
  159. Bal, U. Non-contact estimation of heart rate and oxygen saturation using ambient light. Biomed. Opt. Express 2015, 6, 86–97.
  160. Guazzi, A.R.; Villarroel, M.; Jorge, J.; Daly, J.; Frise, M.C.; Robbins, P.A.; Tarassenko, L. Non-contact measurement of oxygen saturation with an RGB camera. Biomed. Opt. Express 2015, 6, 3320–3338.
  161. Rosa, A.F.; Betini, R.C. Noncontact SpO2 measurement using Eulerian video magnification (EVM). IEEE Trans. Instrum. Meas. 2019, 69, 2120–2130.
  162. Matos, S.; Birring, S.S.; Pavord, I.D.; Evans, H. Detection of cough signals in continuous audio recordings using hidden Markov models. IEEE Trans. Biomed. Eng. 2006, 53, 1078–1083.
  163. Coyle, M.A.; Keenan, D.B.; Henderson, L.S.; Watkins, M.L.; Haumann, B.K.; Mayleben, D.W.; Wilson, M.G. Evaluation of an ambulatory system for the quantification of cough frequency in patients with chronic obstructive pulmonary disease. Cough 2005, 1, 3.
  164. Drugman, T.; Urbain, J.; Dutoit, T. Assessment of Audio Features for Automatic Cough Detection. In Proceedings of the 2011 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011; pp. 1289–1293.
  165. Amoh, J.; Odame, K. Deep neural networks for identifying cough sounds. IEEE Trans. Biomed. Circuits Syst. 2016, 10, 1003–1011.
  166. Hoyos-Barcelo, C.; Monge-Alvarez, J.; Shakir, M.Z.; Alcaraz-Calero, J.-M.; Casaseca-de-La-Higuera, P. Efficient k-NN implementation for real-time detection of cough events in smartphones. IEEE J. Biomed. Health Inform. 2017, 22, 1662–1671.
  167. Ahmed, M.Y.; Rahman, M.M.; Kuang, J. DeepLung: Smartphone Convolutional Neural Network-based Inference of Lung Anomalies for Pulmonary Patients. In Proceedings of the Interspeech 2019, Graz, Austria, 15–19 September 2019; pp. 2335–2339.
  168. Casaseca-de-la-Higuera, P.; Lesso, P.; McKinstry, B.; Pinnock, H.; Rabinovich, R.; McCloughan, L.; Monge-Álvarez, J. Effect of Downsampling and Compressive Sensing on Audio-Based Continuous Cough Monitoring. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 6231–6235.
  169. Larson, E.C.; Lee, T.; Liu, S.; Rosenfeld, M.; Patel, S.N. Accurate and Privacy Preserving Cough Sensing Using a Low-Cost Microphone. In Proceedings of the 13th International Conference on Ubiquitous Computing, Beijing, China, 17–21 September 2011; pp. 375–384.
  170. Monge-Álvarez, J.; Hoyos-Barceló, C.; San-José-Revuelta, L.M.; Casaseca-de-la-Higuera, P. A machine hearing system for robust cough detection based on a high-level representation of band-specific audio features. IEEE Trans. Biomed. Eng. 2018, 66, 2319–2330.
  171. Barata, F.; Kipfer, K.; Weber, M.; Tinschert, P.; Fleisch, E.; Kowatsch, T. Towards Device-Agnostic Mobile Cough Detection with Convolutional Neural Networks. In Proceedings of the 2019 IEEE International Conference on Healthcare Informatics (ICHI), Xi’an, China, 10–13 June 2019; pp. 1–11.
  172. Smith, J. Ambulatory methods for recording cough. Pulm. Pharmacol. Ther. 2007, 20, 313–318.
  173. You, M.; Liu, Z.; Chen, C.; Liu, J.; Xu, X.-H.; Qiu, Z.-M. Cough detection by ensembling multiple frequency subband features. Biomed. Signal Process. Control 2017, 33, 132–140.
  174. Sterling, M.; Rhee, H.; Bocko, M. Automated cough assessment on a mobile platform. J. Med. Eng. 2014, 2014, 1–9.
  175. Krajnik, M.; Damps-Konstanska, I.; Gorska, L.; Jassem, E. A portable automatic cough analyser in the ambulatory assessment of cough. Biomed. Eng. Online 2010, 9, 17.
  176. You, M.; Wang, H.; Liu, Z.; Chen, C.; Liu, J.; Xu, X.-H.; Qiu, Z.-M. Novel feature extraction method for cough detection using NMF. IET Signal Process. 2017, 11, 515–520.
  177. Wang, H.-H.; Liu, J.-M.; You, M.; Li, G.-Z. Audio Signals Encoding for Cough Classification Using Convolutional Neural Networks: A Comparative Study. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 442–445.
  178. Sharan, R.V.; Abeyratne, U.R.; Swarnkar, V.R.; Porter, P. Automatic croup diagnosis using cough sound recognition. IEEE Trans. Biomed. Eng. 2018, 66, 485–495.
  179. Salamon, J.; Bello, J.P. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Process. Lett. 2017, 24, 279–283.
  180. Tschannen, M.; Kramer, T.; Marti, G.; Heinzmann, M.; Wiatowski, T. Heart Sound Classification Using Deep Structured Features. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; pp. 565–568.
  181. Amiriparian, S.; Gerczuk, M.; Ottl, S.; Cummins, N.; Freitag, M.; Pugachevskiy, S.; Baird, A.; Schuller, B.W. Snore Sound Classification Using Image-Based Deep Spectrum Features. In Proceedings of the Interspeech 2017, Stockholm, Sweden, 20–24 August 2017; pp. 3512–3516.
  182. Kahl, S.; Wilhelm-Stein, T.; Hussein, H.; Klinck, H.; Kowerko, D.; Ritter, M.; Eibl, M. Large-Scale Bird Sound Classification Using Convolutional Neural Networks. In Proceedings of the CLEF 2017 (Working Notes), Dublin, Ireland, 11–14 September 2017.
  183. Sak, H.; Senior, A.; Beaufays, F. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv 2014, arXiv:1402.1128.
  184. Murata, A.; Taniguchi, Y.; Hashimoto, Y.; Kaneko, Y.; Takasaki, Y.; Kudoh, S. Discrimination of productive and non-productive cough by sound analysis. Intern. Med. 1998, 37, 732–735.
  185. Shahroudy, A.; Liu, J.; Ng, T.-T.; Wang, G. NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019.
  186. Rahmani, H.; Mahmood, A.; Huynh, D.; Mian, A. Histogram of oriented principal components for cross-view action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2430–2443.
  187. Smith, J.A.; Earis, J.E.; Woodcock, A.A. Establishing a gold standard for manual cough counting: Video versus digital audio recordings. Cough 2006, 2, 6.
  188. Protalinski, E. How Draganfly Brought a ‘Pandemic Drone’ to the U.S. Available online: https://venturebeat.com/2020/04/30/draganfly-pandemic-drone-united-states-pilots/ (accessed on 8 May 2020).
  189. McNabb, M. This “Pandemic Drone” Can See Who Has a Temperature—Or Coughs. Available online: https://dronelife.com/2020/03/31/this-pandemic-drone-can-see-who-has-a-temperature-or-coughs/ (accessed on 8 May 2020).
Figure 1. Thermal image captured by a thermal camera.
Figure 2. The non-contact body temperature measurement framework proposed in [41].
Figure 3. The thermal camera-based continuous body temperature measurement framework proposed in [44].
Figure 4. Block diagram of fever screening process proposed in [46].
Figure 5. Block diagram of the cardiac pulse measurement proposed in [53].
Figure 6. Block diagram of the BR estimation method proposed in [58,59].
Figure 7. Block diagram of the HR estimation method proposed in [59].
Figure 8. Schematic of the experimental setup used in [61] to monitor BR of neonates using an IR thermal camera. (1) Radiant warmer bed, (2) bedside monitor, (3) camera field of view (FOV), (4) IR thermal camera, (5) analysis workstation, and (6) infant under NIRT imaging.
Figure 9. Sample cross-modality images of three domains from the Tufts Face Database [89]. (a) Visible light, (b) thermal, and (c) near infrared (NIR).
Figure 10. The thermal image presented in [98]. The face area is detected using a face detection algorithm and a point on the forehead (red “+” mark) is selected for temperature measurement. Image courtesy of een.com.
Figure 11. Airports are testing thermal cameras and other technology to screen travelers for COVID-19. (a) The thermal screening setup at Incheon International Airport [102], (b) a similar setup at Canberra International Airport [103], and (c) a thermal camera setup in public transport [105].
Figure 12. Different sensors used as a video camera. (a) Webcam, (b) digital camera, (c) smartphone, (d) Microsoft Kinect, and (e) UAV.
Figure 13. Block diagram of non-contact HR monitoring system proposed in [116].
Figure 14. Block diagram of noncontact HR monitoring system proposed in [125].
Figure 15. Block diagram of non-contact vital sign monitoring system proposed in [127].
Figure 16. Video recorded by the UAV (3DR Solo drone) [145].
Figure 17. Scenarios for remote optical detection of vital signs [146].
Figure 18. Block diagram of non-contact vital sign monitoring system proposed in [146].
Figure 19. Cough detection using drones. Image courtesy of Draganfly Innovations Inc.
Table 1. Different studies to extract body temperature using thermal cameras.

Ref | Sensor Used | ROI | Used Technique | Temperature Measured
Bilodeau et al. [41] | 7.5–13 μm, LWIR (FLIR ThermoVision A40M) | Face | Particle filter, Kalman filter | -
Aubakir et al. [42] | 8–14 μm, LWIR (FLIR Lepton 2.5) | Forehead | V-J method | 34.95 °C to 37.00 °C
Sharma et al. [43] | 7.5–13 μm, LWIR (FLIR X63900); NIR (CP-PLUS CP-USC-TAL2) | Face | V-J method | 29.45 °C to 32.82 °C
Lin et al. [44] | 8–14 μm, LWIR (FLIR Lepton 2.5); 8–14 μm, LWIR (Keysight U5855A) | Forehead | Deep learning | 27 °C to 37.5 °C
Sumriddetchkajorn et al. [45] | 7.5–13 μm, LWIR (FLIR ThermoVision A40M) | Face | Image filtering, particle analysis | 35 °C to 40 °C
Silawan et al. [46] | 8–14 μm, LWIR (Optris PI450) | Forehead, mouth, cheek | Multiple data comparison | 36.0 °C to 39.5 °C
Thomas et al. [47] | 7.5–14 μm, LWIR (Fluke TiS65) | Face | Linear regression | 34 °C to 41 °C
Note: NIR = Near infrared, LWIR = Long wavelength infrared.
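Most of the studies in Table 1 share the same basic pipeline: detect the face, crop a forehead (or whole-face) region of interest, and read a temperature statistic from the radiometric frame. The Python sketch below illustrates that pipeline only in outline: the frame_celsius array stands in for a frame already converted to degrees Celsius by the camera SDK, the face box is assumed to come from any detector (e.g., a V-J cascade), and the ROI proportion and fever threshold are arbitrary values chosen for the example, not taken from the cited studies.

```python
import numpy as np

FEVER_THRESHOLD_C = 38.0  # arbitrary screening threshold, not from the cited studies

def forehead_max_temp(frame_celsius: np.ndarray, face_box: tuple) -> float:
    """Return the maximum temperature inside a crude forehead ROI.

    frame_celsius: 2-D array of per-pixel temperatures (degrees C), i.e. a
    radiometric thermal frame already converted by the camera SDK.
    face_box: (x, y, w, h) from any face detector, e.g. a V-J cascade.
    """
    x, y, w, h = face_box
    # Take the upper ~25% of the face box as a rough forehead region.
    roi = frame_celsius[y : y + max(1, h // 4), x : x + w]
    # Max is less sensitive than the mean to cool pixels (hair, background).
    return float(roi.max())

# Example with a synthetic 240x320 frame at ~34 C and a warm forehead patch.
frame = np.full((240, 320), 34.0)
frame[40:60, 140:180] = 38.6
temp = forehead_max_temp(frame, face_box=(120, 30, 80, 120))
print(f"forehead max = {temp:.1f} C, fever flag = {temp >= FEVER_THRESHOLD_C}")
```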
Table 2. Different studies to monitor vital signs using thermal cameras.

Ref | Sensor Used | Vital Signs | ROI | Used Technique | Result
Murthy et al. [50] | 3–5 μm, MWIR (hardware unspecified) | BR | Nose | Advanced statistical algorithm | Accuracy = 98.5%
Fei et al. [51,52] | 3–5 μm, MWIR (FLIR, model unspecified); 3–5 μm, MWIR (Indigo Systems, model unspecified) | BR | Nose | Optical bandpass filter | -
Sun et al. [53] | MWIR (Indigo Systems, model unspecified) | HR | Forehead, neck and wrist | FFT | PCC = 0.994
Garbey et al. [54] | MWIR (Indigo Systems, model unspecified) | HR | Forehead, neck and wrist | FFT | CAND = 88.52%
Chekmenev et al. [55] | LWIR (FLIR, model unspecified) | HR and BR | Face and neck | CWT | -
Fei et al. [56] | 3–5 μm, MWIR (FLIR SC6000) | BR | Nose | CWT | CAND = 98.27%
Shakhih et al. [57] | 7–14 μm, LWIR (Infrared Camera Incorporation 7640 P-series) | TI and TE | Nose | Mean pixel intensity | PCC = 0.796 (TI), 0.961 (TE)
Pereira et al. [58] | 7.5–14 μm, LWIR (VarioCAM® HD head 820S/30) | BR | Nose | Particle filter framework and temporal filtering | MAE = 0.33, 0.55 and 0.96 breaths/min
Pereira et al. [59] | 2–5.5 μm, MWIR (InfraTec 9300) | HR and BR | Head and nose | Particle filter framework, temporal filtering and PCA | RMSE = 3 bpm (HR), 0.7 breaths/min (BR)
Pereira et al. [60] | 7.5–14 μm, LWIR (VarioCAM® HD head 820S/30) | BR | Nose, mouth, shoulders | Particle filter framework and signal fusion | RMSE = 0.24 and 0.89 breaths/min
Abbas et al. [61] | 1–14 μm, LWIR (VarioCAM HR head) | BR | Nose | CWT | -
Pereira et al. [62] | 7.5–14 μm, LWIR (VarioCAM® HD head 820S/30) | BR | Nose | Particle filter framework and temporal filtering | Relative error = 3.42%
Pereira et al. [63] | 7.5–14 μm, LWIR (VarioCAM® HD head 820S/30) | BR | - | Black-box | RMSE = 4.15 ± 1.44 breaths/min
Note: MWIR = Mid wavelength infrared, LWIR = Long wavelength infrared, MAE = Mean absolute error, RMSE = Root mean square error, CAND = Complement of the absolute normalized difference, TI = inspiration time, TE = expiration time.
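Several of the entries in Table 2 reduce to two steps: track the mean pixel intensity of a nostril region over time (exhaled air periodically warms the region), then take the dominant frequency of that trace as the breathing rate. The sketch below shows a minimal version of the frequency-analysis step, assuming the ROI trace has already been extracted at a known frame rate; the particle-filter ROI tracking and temporal filtering used by Pereira et al. [58,59,60] are deliberately omitted, and the breathing band limits are illustrative choices.

```python
import numpy as np

def breathing_rate_fft(roi_means: np.ndarray, fps: float) -> float:
    """Estimate breathing rate (breaths/min) from a nostril-ROI mean-intensity trace."""
    x = roi_means - roi_means.mean()               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    # Restrict to a plausible breathing band, ~0.1-0.7 Hz (6-42 breaths/min).
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz

# Synthetic 60 s trace at 30 fps: 0.25 Hz breathing (15 breaths/min) plus noise.
fps = 30.0
t = np.arange(0, 60, 1 / fps)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)
print(f"estimated BR = {breathing_rate_fft(trace, fps):.1f} breaths/min")
```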
Table 3. Different studies to monitor vital signs using video camera technology.

Ref | Sensor Used | Vital Signs | ROI | Used Technique | Results
Poh et al. [116] | Webcam | HR | Face | ICA | PCC = 0.95, RMSE = 4.63 bpm
Pursche et al. [118] | Webcam | HR | Forehead, nose and mouth | ICA | -
Feng et al. [119] | Webcam | HR | Forehead | ICA | PCC = 0.99
Lewandowska et al. [120] | Webcam | HR | Face and forehead | PCA | -
Bousefsaf et al. [121] | Webcam | HR | Face | CWT | -
Wu et al. [122] | Webcam | HR | Face | CWT | SNR (dB) = −3.01
Wu et al. [123] | Webcam | HR | Cheeks | MRSPT | RMSE = 6.44 bpm
Feng et al. [124] | Webcam | HR | Cheeks | GRD | PCC = 1
Cheng et al. [125] | Webcam | HR | Face | JBSS + EEMD | PCC = 0.91
Xu et al. [126] | Webcam | HR | Face | PLS + MEMD | PCC = 0.81
Chen et al. [127] | Digital camera | HR | Brow area | EEMD | PCC = 0.91
Lin et al. [128] | Digital camera | HR | Brow area | EEMD + MLR | PCC = 0.96
Lee et al. [129] | Digital camera | HR | Cheek | MOCF | RMSE = 1.8 bpm
Tarassenko et al. [130] | Digital camera | HR, RR, SpO2 | Forehead and cheek | AR modelling and pole cancellation | MAE = 3 bpm
Al-Naji et al. [131] | Digital camera | HR and RR | Face, palm, wrist, arm, neck, leg, forehead, head and chest | EEMD + ICA | PCC = 0.96, RMSE = 3.52
Aarts et al. [132] | Digital camera | HR | Face and cheek | JFTD | -
Cobos-Torres et al. [133] | Digital camera | HR | Abdominal area | Stack FIFO | PCC = 0.94
Gibson et al. [134] | Digital camera | HR and RR | Face and chest | EVM | Mean bias = 4.5 bpm
De Haan et al. [135] | CCD | HR | Face | CHROM | PCC = 1, RMSE = 0.5
De Haan et al. [136] | CCD | HR | Face | PBV | PCC = 0.99, RMSE = 0.64
Wang et al. [137] | CCD | HR | Face and forehead | 2SR | PCC = 0.94
Wang et al. [138] | CCD | HR | Face | POS | SNR (dB) = 5.16
Wang et al. [139] | CCD | HR | Face | Sub-band decomposition | SNR (dB) = 4.77
Yu et al. [140] | CMOS | HR and RR | Palm and face | SCICA | PCC = 0.9
Kwon et al. [141] | Smartphone | HR | Face | ICA | MAE = 1.47 bpm
Bernacchia et al. [142] | Microsoft Kinect | HR and RR | Neck, thorax and abdominal area | ICA | PCC = 0.91
Smilkstein et al. [143] | Microsoft Kinect | HR | Face | EVM | -
Gambi et al. [144] | Microsoft Kinect | HR | Forehead, cheeks, neck | EVM | RMSE = 2.2 bpm
Al-Naji et al. [145] | UAV | HR and RR | Face | CEEMD + ICA | PCC = 0.99, RMSE = 0.7 bpm
Al-Naji et al. [146] | Digital camera, UAV | HR and RR | Face and forehead | CEEMDAN + CCA | PCC = 0.99, RMSE = 0.89 bpm
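As a concrete example of the rPPG algorithms in Table 3, the sketch below implements a simplified, single-window version of the plane-orthogonal-to-skin (POS) projection of Wang et al. [138]: temporally normalised RGB traces are projected onto two axes orthogonal to the standardised skin tone and combined with an adaptive weight, and the heart rate is read off the spectral peak. The overlap-add windowing of the original method is omitted and the traces here are synthetic, so treat this as an illustration under those assumptions rather than a reference implementation. The 0.7–4.0 Hz band corresponds to roughly 42–240 bpm, a common physiological prior in the rPPG literature.

```python
import numpy as np

def pos_pulse(rgb_means: np.ndarray) -> np.ndarray:
    """Simplified single-window POS; rgb_means has shape (N, 3) of ROI-mean R, G, B."""
    c_n = rgb_means / rgb_means.mean(axis=0)       # temporal normalisation
    s1 = c_n[:, 1] - c_n[:, 2]                     # G - B projection
    s2 = c_n[:, 1] + c_n[:, 2] - 2.0 * c_n[:, 0]   # G + B - 2R projection
    # Alpha tuning combines the two projections into one pulse signal.
    h = s1 + (np.std(s1) / np.std(s2)) * s2
    return h - h.mean()

def heart_rate_bpm(pulse: np.ndarray, fps: float) -> float:
    """Read the heart rate off the dominant spectral peak in the pulse band."""
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # ~42-240 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 20 s of 30 fps RGB means carrying a weak 1.2 Hz (72 bpm) pulse.
fps = 30.0
t = np.arange(0, 20, 1 / fps)
pulse = 0.002 * np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([0.6 + 0.5 * pulse, 0.5 + pulse, 0.4 + 0.3 * pulse], axis=1)
rgb += 0.001 * np.random.randn(*rgb.shape)
print(f"estimated HR = {heart_rate_bpm(pos_pulse(rgb), fps):.1f} bpm")
```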
Table 4. Different studies to monitor vital signs using combined technology.

Ref | Sensor Used | Vital Signs | ROI | Used Technique | Result
Gupta et al. [147] | RGB, monochrome and thermal camera | HR and HRV | Cheeks and forehead | ICA | Error = 4.62%
Hu et al. [148] | RGB and thermal camera | BR | Nose and mouth | Viola–Jones algorithm together with the screening technique | LCC = 0.971
Hu et al. [149] | RGB, infrared and thermal camera | HR and BR | Mouth and nose regions | Moving average filter | LCC = 0.831 (BR), 0.933 (HR)
Bennett et al. [150] | Thermal and digital camera | HR and blood perfusion | Face and arm | EVM | -
Note: LCC = Linear correlation coefficient.
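Eulerian video magnification (EVM), which appears in both Table 3 and Table 4, amplifies subtle temporal changes rather than tracking features: each pixel (or pyramid level) is bandpass filtered in time and the filtered signal is scaled and added back. The sketch below shows only that core step on raw pixel time series, assuming a short grayscale clip held in memory as a NumPy array; the Laplacian/Gaussian pyramid decomposition of the full method is left out for brevity, and the band limits and gain are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_temporal(video: np.ndarray, fps: float, lo: float, hi: float,
                     alpha: float) -> np.ndarray:
    """Core EVM step: temporally bandpass each pixel and amplify the result.

    video: (frames, height, width) grayscale clip; lo/hi in Hz; alpha is the gain.
    """
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, video, axis=0)   # per-pixel temporal filtering
    return video + alpha * filtered

# Synthetic clip: a faint 1 Hz flicker (e.g., pulse-like) is made visible.
fps, n = 30.0, 150
t = np.arange(n) / fps
clip = np.full((n, 32, 32), 100.0)
clip += 0.2 * np.sin(2 * np.pi * 1.0 * t)[:, None, None]  # subtle oscillation
out = magnify_temporal(clip, fps, lo=0.7, hi=2.0, alpha=50.0)
print("input peak-to-peak:", round(np.ptp(clip), 2),
      "output peak-to-peak:", round(np.ptp(out), 1))
```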
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
