Article

Real-Time External Respiratory Motion Measuring Technique Using an RGB-D Camera and Principal Component Analysis †

School of Computer Science and Engineering, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper: Wijenayake, U.; Park, S.Y. PCA based analysis of external respiratory motion using an RGB-D camera. In Proceedings of the IEEE International Symposium on Medical Measurements & Applications (MeMeA), Benevento, Italy, 15–18 May 2016.
Sensors 2017, 17(8), 1840; https://doi.org/10.3390/s17081840
Submission received: 30 May 2017 / Revised: 7 August 2017 / Accepted: 7 August 2017 / Published: 9 August 2017

Abstract

Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a widely discussed topic in external beam radiotherapy treatment. Errors in target/normal tissue delineation and dose calculation and an increase in the amount of healthy tissue exposed to high radiation doses are some of the undesired problems caused by inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but the majority of them depend heavily on radiography/fluoroscopy imaging, wearable markers or surgical node implantation. In this article, we propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique that limits the measuring area to an anatomically consistent region and helps to handle patient movements during the treatment. We achieved a 0.97 correlation compared to a spirometer and a 0.53 mm average error taking a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors.

1. Introduction

Radiotherapy is one of the most widely discussed topics in the modern medical field. It has been widely used in cancer treatments to remove tumors without damaging the neighboring healthy tissues. However, inaccurate system setups, anatomical motion and deformation and tissue delineation errors lead to inconsistencies in radiotherapy approaches. Respiratory-based anatomical motion and deformation largely cause errors in both radiotherapy planning and delivery processes in thoracic and abdominal regions [1,2]. With respiration, tumors in abdominal and thoracic regions can move as much as 35 mm [3,4,5,6]. As a consequence, inaccurate respiratory motion estimations directly lead to tissue delineation errors, dose miscalculations, exposure of healthy tissues to high doses and erroneous dose coverage of the clinical target volume [7,8,9,10,11].
Motion encompassing, respiratory gating, breath holding and forced shallow breathing with abdominal compression are some of the existing conventional respiratory motion estimation methods [1]. Difficulties in handling patient movements, longer treatment time, patient training and discomfort are some of the most common drawbacks of these methods. On the other hand, real-time tumor tracking techniques have started to gain much attention due to their ability to actively estimate respiratory motion and continuously synchronize the beam with the motion of the tumor.
Apart from radiotherapy, measurement of respiration is an important task in pulmonary function testing, which is crucial for the early detection of potentially fatal illnesses. Spirometry and pneumotachography are two well-known pulmonary function testing methods. These methods need direct contact with the patient during measurement and may interfere with natural respiration. Furthermore, they measure only the full respiratory volume and cannot assess the regional pulmonary function of different chest wall regions. Hence, there is a need for a non-contact respiratory measurement technique that can evaluate not only the complete, but also the regional respiration.
In this paper, we investigate the feasibility of using a commercial RGB-D camera as a non-contact, non-invasive and whole-field respiratory motion-measuring device, which will enhance patient comfort. These low-cost RGB-D cameras can provide real-time depth information of a target surface. We can use this depth information for respiratory motion measurement, but cannot achieve high accuracy due to the considerable amount of noise in the raw depth data. Therefore, we propose a technique that builds an accurate respiratory motion model using principal component analysis (PCA) and then uses that model for real-time respiratory motion measurement. First, we apply hole-filling and bilateral filtering to the first 100 raw depth frames and use the filtered depth data to create a PCA-based motion model. In the real-time respiratory motion-measuring stage, we project each depth frame onto the motion model (principal components) and reconstruct it back, removing the spatial and temporal noise and the holes in the depth data. We can achieve higher motion measurement accuracy by using these reconstructed depth data instead of the raw depth data. The initial results of our proposed method were published in [12].
The results of this study—accurate measurements of external surface motion—can be used to predict the internal tumor motion, which is an important task of radiotherapy systems. Correspondence models that make a relationship between respiratory surrogate signals, such as spirometry or external surface motion, and internal tumor/organ motion have been studied in the literature [13,14,15,16]. Neural networks, principal component analysis and b-spline are a few example models that have been used for predicting the internal motion.
This paper is organized as follows. First, a comprehensive review of related works is presented in Section 2. An overview of the proposed method that describes the key steps and how to handle the problems existing in related works is given in Section 3. A detailed description of all of the materials and methods followed in the proposed method is presented in Section 4. The results of the experiments we conducted to evaluate the accuracy of the proposed method are given in Section 5. Finally, Section 6 concludes the paper by discussing the results and issues of the proposed method.

2. Related Work

The Synchrony respiratory tracking system, a subsystem of CyberKnife, is the first technology that continuously synchronizes beam delivery to the motion of the tumor [17]. The external respiratory motion is tracked using three optical fiducial markers attached to a tightly-fitting vest. Small gold markers are implanted near the target area before treatment to ensure the continuous correspondence between internal and external motion. The Calypso, the prostate motion-tracking system integrated into Varian (Varian Medical Systems, Palo Alto, CA, USA), eliminates the need for internal-external motion modeling by implanting three tiny transponders with an associated wireless tracking [18]. The BrainLAB ExacTrac positioning system uses radiopaque fiducial markers, implanted near the target isocenter, with external infrared (IR) reflecting markers [19]. Internal markers are tracked by an X-ray localization system, while an IR stereo camera tracks the external markers. The Xsight Lung Tracking system (an extension of the CyberKnife system) is a respiratory motion-tracking system of lung lesion that eliminates the need for implanted fiducial markers [20].
Another interesting respiratory motion modeling technique using 4D computed tomography (CT) images was introduced in [21], where PCA is used to reduce the motion artifacts appearing on the CT images and to synthesize the CT images in different respiratory phases. Mori et al. used cine CT images to measure the intrafractional respiratory movement of pancreatic tumors [22]. Yang et al. estimated and modeled the respiratory motion by applying an optical flow-based deformable image registration technique on 4D-CT images that were acquired in cine mode [23]. In contrast to CT, magnetic resonance imaging (MRI) provides lesser ionization and excellent soft tissue contrast that helps to achieve better characterization. Therefore, 4D and cine-MRI images have been widely used for measuring organ/tumor motion due to respiration [24,25,26,27,28]. Apart from that, researchers have been experimenting with ultrasound images for tracking organs that move with respiration [29,30].
Radiography and fluoroscopy imaging techniques such as X-ray, CT and MRI have the problems of higher cost, slow acquisition, low resolution, lower signal-to-noise ratio and especially exposure to an extra dose of radiation [2,21,31,32]. Additionally, some of these systems have the disadvantage of invasive fiducial marker implantation procedures that increase the patient preparation time and treatment time.
To avoid these problems, researchers have proposed optical methods, which mainly consist of cameras, light projectors and markers. With the advantage of non-contact measurement, optical methods have no interference with the natural respiration of the patient. Ferrigno et al. proposed a method to analyze the chest wall motion by using passive markers placed on the thorax and abdomen [33]. Motion measurement is carried out by computing the 3D coordinates of these markers with the help of specially-designed multiple cameras. In [34], the authors proposed a respiratory motion-estimation method based on coded visual markers. They also utilized a stereo camera to calculate the 3D coordinates of the markers and estimated the 3D motion of the chest wall according to the movements of the markers. Yan et al. investigated the correlation between the motion of external markers and an internal tumor target [35]. They placed four infrared reflective markers on different areas of the chest wall and used a stereo infrared camera to track the motion of the markers. Alnowami et al. employed the Codamotion infrared marker-based tracking system to acquire the chest wall motion and applied probability density estimation to predict the respiratory motion [36,37]. Some researchers have investigated respiratory motion evaluation by calculating curvature variance of the chest wall using a fiber optic sensor and fiber Bragg grating techniques [38,39]. Even though the marker-based methods provide higher data acquisition rates and accuracy, the marker attachment procedure is time consuming and results in inconveniences for the patient. Furthermore, a large number of markers is needed to achieve higher spatial resolution.
In contrast to marker-based methods, structured light techniques provide whole-field measurement with high spatial resolution. Structured light systems consist of a projector and a camera and emit a light pattern onto the target surface, creating artificial correspondences. The 3D information of the target surface can be found by solving the correspondences on the captured image of the illuminated scene. Aoki et al. proposed a respiratory monitoring system using a near-infrared multiple slit-light projection [40]. Even though they were able to achieve a respiratory motion pattern highly correlated with a spirometer, they could not measure the exact respiratory volume or motion due to the variable projection coverage on the chest wall, which is caused by patient movements. Chen et al. solved this problem by introducing active light markers to define the measuring boundary, offering a consistent region for volume evaluation [41]. They also used a projector to illuminate the chest wall with a structured light pattern of color stripes and a camera to capture the height-modulated images. Then, the 3D surface calculated by triangulation is used to derive the respiratory volume information. However, the long baseline and the restriction that the camera plane be parallel to the reference frame limit the portability of this method. In [31], the authors adopted a depth sensor, which uses a near-UV structured light pattern, along with a state-of-the-art non-rigid registration algorithm to identify the 3D deformation of the chest wall and hence the tumor motion. Time of flight (ToF) is another well-known optical method that has been used by researchers for respiratory motion handling during radiotherapy [42,43,44].
With recent advances, commercial RGB-D sensors such as the Microsoft Kinect and ASUS Xtion Pro have been used in a broad range of research work. Their relatively low cost and the fact that these sensors can measure motion without any markers or wearable devices encourage researchers to use them in respiratory motion analysis. However, the low depth resolution of these sensors, which is about 1 cm at a 2 m distance, restricts their usage mostly to evaluating respiratory functions such as respiratory rate [45,46,47,48,49,50,51], where highly accurate motion information is not needed. In the case of radiotherapy, respiratory motion induces tumor movements of up to 2 cm in abdominal or thoracic regions and requires less than 1 mm accuracy in motion measurements [52]. Xia and Siochi overcame the low depth resolution of the Kinect sensor by using a translation surface, which magnifies the respiratory motion and reduces the noise of irregular surfaces [53]. A few other researchers utilized RGB-D sensors to acquire 3D surface data of the chest wall and applied PCA to capture 1D respiration curves of disjoint anatomical regions (thorax and abdomen), which are related to the principal axes [32,54]. However, the respiratory motion measurement accuracy of these methods is affected by patient movements, as they do not provide a proper method for handling them.

3. Overview of the Proposed Method

In this study, we introduce a non-contact, non-invasive and real-time respiratory motion measurement technique using an RGB-D camera, which is small in size and more flexible for handling. Furthermore, we introduce a patient movement-handling method using four dot markers. These four markers define the measurement boundaries of the moving chest wall, providing a consistent region for respiratory motion estimation.
Using the RGB-D camera, we capture continuous depth images of the patient’s chest wall at 6.7 fps covering the whole thoracic and abdominal area. Then, we create a respiratory motion model by applying PCA to the first 100 frames, decomposing the data into a set of motion bases that correspond to principal components (PCs). Before applying PCA, we use an edge-preserving bilateral filter and a hole-filling method to remove the noise and the holes from the first 100 frames.
Through experimental analysis, we found that a respiratory motion model can be accurately obtained using the first three principal components. The remaining principal components represent the noise and motion artifacts existing in the input data. We start the real-time respiratory motion measurement from the 101st frame, projecting each new depth frame onto the motion model to obtain a low-dimensional representation of the data. To evaluate the motion in metric space, depth images are reconstructed using the projection coefficients. Figure 1 shows the flowchart of the proposed respiratory motion measurement process.
Using an RGB-D camera for respiratory motion measurement has many advantages. First, compared to CT/MRI techniques, the proposed method prevents patients from being exposed to an extra dose of radiation. The RGB-D camera is a non-contact optical method and has no interference with the natural breathing of the target. Moreover, it can give real-time depth information of the target surface. Therefore, we can provide a comfortable, efficient and shorter treatment to the patients. Compared to marker-based methods, the RGB-D camera has high spatial resolution and provides depth information of the entire target surface; hence, we can measure not only the entire chest wall motion, but also the regional motions. The RGB-D camera we use in our system provides depth data in 640 × 480 resolution, and we select a 200 × 350 ROI providing 70,000 data points for motion measurement, which is far more than marker-based methods (as an example, [36] used a 4 × 4 marker grid providing only 16 data points). The smaller size and lower price of RGB-D cameras facilitate building a more portable and inexpensive respiratory motion measurement system compared to some other optical methods.
However, RGB-D cameras have a known problem of low accuracy. Depth data acquired from low-cost RGB-D cameras contain much noise and many holes that affect the accuracy of motion measurement. Alnowami et al. and Tahavori et al. used depth data acquired from an RGB-D camera for respiratory motion measurement, but could not achieve sub-millimeter accuracy in experiments with real subjects [55,56]. Using the PCA-based motion model, we increase the motion measurement accuracy by removing the spatial and temporal noise along with the holes in the depth data. Because the filtered depth data are used as the input of the PCA-based motion model, we do not need to apply bilateral filtering or hole-filling to each depth frame during real-time motion measurement. By comparing with a laser line scanner, we show that our method can achieve sub-millimeter accuracy in respiratory motion measurement using a low-cost RGB-D camera.

4. Materials and Methods

4.1. Data Acquisition

We use an Asus Xtion PRO RGB-D camera (consisting of an RGB camera, an infrared camera and a Class 1 laser projector that is safe under all conditions of normal use) to acquire real-time depth data and RGB images of the entire thoracic and abdominal region of the target subjects. The RGB-D camera provides both depth and RGB images at 640 × 480 resolution and 30 frames per second. However, due to the process of saving data to disk for later analysis, we could acquire only about 6.7 frames per second. The OpenNI library is used to grab the depth and RGB data from the camera and to convert them to matrix format for later use. The depth camera covers not only the intended measuring area, but also the background regions. Moreover, the coverage of the chest wall varies due to the surface motion and patient movements. However, we need an anatomically consistent measuring area during the whole treatment time to deliver the radiation dose accurately.
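As a brief illustration of this acquisition step, the following C++ sketch grabs depth and color frames through OpenCV's OpenNI2 backend; the backend choice and loop structure are illustrative assumptions, not the exact capture code used in this work.

#include <opencv2/opencv.hpp>

int main()
{
    // Open the Xtion PRO through OpenCV's OpenNI2 backend
    // (assumption: OpenCV was built with OpenNI2 support).
    cv::VideoCapture sensor(cv::CAP_OPENNI2);
    if (!sensor.isOpened()) return -1;

    cv::Mat depth, color;
    for (;;) {
        if (!sensor.grab()) break;
        // 16-bit depth map in millimeters and the BGR image, both 640 x 480.
        sensor.retrieve(depth, cv::CAP_OPENNI_DEPTH_MAP);
        sensor.retrieve(color, cv::CAP_OPENNI_BGR_IMAGE);

        // ... store or process the frame pair here ...
        if (cv::waitKey(1) == 27) break;   // ESC stops the capture loop
    }
    return 0;
}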
To handle this problem, we attach four dot markers to define a measuring boundary on the chest wall covering the whole thoracic and abdominal area. Instead of using active LED markers or retroreflective markers, which can interfere with the RGB-D camera, we use small white color circles made of sticker paper.
After obtaining informed consent from all subjects following the institutional ethics, we collected respiratory motion data from ten healthy volunteers. All of the volunteers were advised to wear a skin-tight black t-shirt and lie down in a supine position. The four markers are attached to the t-shirt, and the RGB-D camera is placed nearly 85 cm above the volunteer, as shown in Figure 2. According to its specification, the RGB-D camera can provide depth information within an 80 cm to 350 cm range. However, [55] showed that the RGB-D camera gives the best accuracy within the 85 cm to 115 cm range. By keeping the camera closer to the volunteer, we can cover the measuring area with a higher number of pixels, which eventually provides more data points for motion analysis. Considering all of these facts, we place the RGB-D camera 85 cm above the patient. Along with the continuous depth frames, visual images are also captured using the built-in RGB camera for a duration of nearly one minute. The RGB images are used only for detecting the markers that determine the measuring ROI.

4.2. Measuring Region

To define the measuring region, we detect the dot markers on the RGB image by applying a few image processing techniques. Otsu’s global binary thresholding method followed by contour detection and ellipse fitting [57] is applied to identify the center coordinates of each dot marker accurately. Using the intrinsic and extrinsic parameters of the depth and RGB cameras, which are acquired by a calibration process [58,59], depth images are precisely aligned (with sub-pixel accuracy) to the visual (RGB) images. Therefore, the marker coordinates found on the visual images can be used directly on the depth images to define the ROI, which marks the measuring area. The position, shape and size of the ROI are not consistent throughout all of the depth frames due to the motion of the chest wall and the movement of the patient. In order to make it consistent, the selected ROI on every depth frame is mapped into a rectangular shape of predefined size using a projective transformation [60]. Figure 3 shows the steps followed for detecting the dot markers and creating the rectangular ROI. We use this rectangular ROI for further processing in our proposed method.
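A possible OpenCV implementation of this step is sketched below; the contour size gate, the marker ordering (assumed top-left, top-right, bottom-right, bottom-left) and the 200 × 350 target size are assumptions used only for illustration.

#include <opencv2/opencv.hpp>
#include <vector>

// Detect the four white dot markers on the RGB image and warp the region
// they enclose into a fixed-size rectangular ROI on the aligned depth image.
cv::Mat extractRectangularROI(const cv::Mat& rgb, const cv::Mat& depth)
{
    cv::Mat gray, bin;
    cv::cvtColor(rgb, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Fit an ellipse to each candidate contour and keep its center point.
    std::vector<cv::Point2f> centers;
    for (const auto& c : contours) {
        if (c.size() < 5) continue;                       // fitEllipse needs >= 5 points
        cv::RotatedRect e = cv::fitEllipse(c);
        if (e.size.area() > 10 && e.size.area() < 500)    // marker size gate (assumed)
            centers.push_back(e.center);
    }
    CV_Assert(centers.size() == 4);                       // exactly four dot markers

    // Map the marker quadrilateral to a 200 x 350 rectangle (projective transform).
    std::vector<cv::Point2f> dst = { {0, 0}, {199, 0}, {199, 349}, {0, 349} };
    cv::Mat H = cv::getPerspectiveTransform(centers, dst);

    cv::Mat roi;
    // Nearest-neighbor interpolation avoids mixing valid depth with zero holes.
    cv::warpPerspective(depth, roi, H, cv::Size(200, 350), cv::INTER_NEAREST);
    return roi;   // anatomically consistent measuring region
}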

4.3. Respiratory Motion Modeling Using PCA

4.3.1. Depth Data Pre-Processing

We use the first 100 depth frames to create a respiratory motion model using PCA. Since we use this model for real-time respiratory motion measurement, a precise model should be created using accurate input data. Due to the slight reflection of the t-shirt and device errors, holes can appear in the same spot of the chest wall area for a few consecutive depth frames, as depicted in Figure 4a. Moreover, there is much noise in the raw depth data provided by the sensor. If we directly use these data as the input for PCA without any pre-processing, we encounter erroneous results as in Figure 4b, where most of the data variation is concentrated in the areas of the holes.
To avoid this problem, we first apply a hole-filling technique on the depth images using the zero-elimination mode filter. If there are enough non-zero neighbors, this filter replaces pixels with zero depth values with the statistical mode of their non-zero neighbors. Next, we remove noise from the depth images using an edge-preserving bilateral filter [61]. Figure 4c shows the PCA result when we use the filtered depth data as the input.
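The sketch below illustrates this pre-processing chain; the neighborhood size, the non-zero-neighbor threshold and the bilateral filter parameters are assumptions, not the tuned values used in this work.

#include <opencv2/opencv.hpp>
#include <map>

// Fill zero-depth holes with the mode of their non-zero neighbors, then apply
// an edge-preserving bilateral filter to the 16-bit depth frame.
cv::Mat preprocessDepth(const cv::Mat& depth16u)
{
    const int r = 2;                                  // 5 x 5 neighborhood
    cv::Mat filled = depth16u.clone();

    for (int y = r; y < depth16u.rows - r; ++y) {
        for (int x = r; x < depth16u.cols - r; ++x) {
            if (depth16u.at<ushort>(y, x) != 0) continue;   // not a hole

            std::map<ushort, int> hist;               // histogram of non-zero neighbors
            int nonZero = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    ushort d = depth16u.at<ushort>(y + dy, x + dx);
                    if (d != 0) { ++hist[d]; ++nonZero; }
                }

            // Replace the hole only when enough valid neighbors exist (threshold assumed).
            if (nonZero >= 12) {
                ushort mode = 0; int best = 0;
                for (const auto& kv : hist)
                    if (kv.second > best) { best = kv.second; mode = kv.first; }
                filled.at<ushort>(y, x) = mode;
            }
        }
    }

    // cv::bilateralFilter accepts 8-bit or 32-bit float images, so convert first.
    cv::Mat depth32f, smooth32f;
    filled.convertTo(depth32f, CV_32F);
    cv::bilateralFilter(depth32f, smooth32f, 5, 25.0, 5.0);
    return smooth32f;                                 // denoised depth in millimeters
}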

4.3.2. Principal Component Analysis

After filtering the first 100 depth frames, PCA [62] is applied to build a respiratory motion model that is encoded by the major principal components. By column-wise vectorization of the depth data d_i of the selected rectangular ROI, we create an input data matrix D of dimension m × n:

D_{m \times n} = [\, d_1, d_2, \ldots, d_n \,],    (1)

where n is the total number of depth frames (n = 100) and m is the number of pixels in the rectangular ROI. First, we subtract the mean vector d̄, calculated as

\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i,    (2)

from the input data matrix to create a normalized matrix D̂:

\hat{D} = [\, d_1 - \bar{d},\; d_2 - \bar{d},\; \ldots,\; d_n - \bar{d} \,].    (3)

Since m ≫ n, we use Equation (4) to calculate the n × n covariance matrix C, reducing the dimensionality of the input data:

C = \frac{1}{n-1} \hat{D}^{T} \hat{D}.    (4)

The transformation, which maps the high-dimensional input depth data into a low-dimensional PC subspace, is obtained by solving for the eigenvalues λ_j and eigenvectors ϕ_j of the covariance matrix using Equation (5):

C \phi_j = \lambda_j \phi_j.    (5)

All of the eigenvectors, which correspond to principal components, are then arranged in descending order {ϕ_1, ϕ_2, ϕ_3, ..., ϕ_n} according to the magnitude of the eigenvalues (λ_1 ≥ λ_2 ≥ λ_3 ≥ ... ≥ λ_n).
Through experimental analysis, we found that the first eigenvalue dominates the rest of the eigenvalues and accounts for over 98% of the data variation during regular respiration. However, when the respiration is irregular, three eigenvalues are required to cover 98% of the data variation. Figure 5 depicts the first ten eigenvalues of the covariance matrix calculated from five samples of regular breathing and three samples of irregular breathing. Figure 6 shows three graphs of projection coefficients (explained in Section 4.4.1) corresponding to the first three principal components calculated for regular breathing, while Figure 7 shows examples for irregular breathing. An apparent respiratory motion pattern is visible only on the first PC for regular breathing, while the first three PCs show a respiratory pattern for irregular breathing. Following this analysis, we represent the respiratory motion model W using the first three principal components (ϕ_1, ϕ_2, ϕ_3), reducing the dimensionality of the input depth data.
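The following sketch shows how such a motion model can be assembled with OpenCV's cv::PCA, which internally performs the same mean subtraction and eigen-decomposition as Equations (2)–(5); the data layout and the hard-coded choice of three components follow the description above, while the surrounding structure (types, names) is an illustrative assumption.

#include <opencv2/opencv.hpp>
#include <vector>

// PCA-based respiratory motion model built from the first n = 100
// pre-processed rectangular ROI depth frames.
struct MotionModel {
    cv::PCA pca;   // holds the mean vector d_bar and the retained eigenvectors (PCs)
};

MotionModel buildMotionModel(const std::vector<cv::Mat>& roiFrames)
{
    const int n = static_cast<int>(roiFrames.size());        // 100 frames
    const int m = roiFrames[0].rows * roiFrames[0].cols;     // pixels in the ROI

    // Input data matrix D (m x n): one column-wise vectorized frame per column.
    cv::Mat D(m, n, CV_32F);
    for (int i = 0; i < n; ++i) {
        cv::Mat col;
        roiFrames[i].convertTo(col, CV_32F);
        col.reshape(1, m).copyTo(D.col(i));
    }

    MotionModel model;
    // Keep only the first three principal components; the remaining ones
    // mainly encode noise and motion artifacts.
    model.pca = cv::PCA(D, cv::noArray(), cv::PCA::DATA_AS_COL, 3);
    return model;
}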

4.4. Real-Time Respiratory Motion Measurement

After creating a respiratory motion model using the first 100 depth frames, we start the real-time respiratory motion measurement from the 101st frame. The data we use for respiratory motion modeling should cover a few complete respiratory cycles in order to generalize the input data. By following this rule, we can make sure that the motion model represents all of the statuses of the respiratory cycle. After observing all of the experiment datasets, we empirically select 100 as the number of depth frames for PCA-based motion modeling.

4.4.1. Projection and Reconstruction

We project each new depth frame d_i (i > 100) onto the motion model W = [ϕ_1 ϕ_2 ϕ_3] in order to represent it using the first three principal components. The following equation is used as the projection operation, where β_i represents the projection coefficients:

\beta_i = W^{T} (d_i - \bar{d}).    (6)

Even though the calculated projection coefficients represent a clear respiratory motion, we cannot use them directly for measuring the motion, as these coefficients are three separate values in the principal component domain instead of the metric domain. Therefore, the following equation is used to reconstruct the depth data d̂_i, which are in the metric domain, from the projection coefficients:

\hat{d}_i = \bar{d} + W \beta_i.    (7)
Here, the advantage is that we do not need to apply hole-filling or denoising filters to the depth data that we use for real-time respiratory motion measurement. By reconstructing the depth images using the motion model, we can remove the spatial and temporal noise, as well as the holes in the data. Figure 8 depicts the advantage of applying bilateral filtering and hole-filling to the input depth images for PCA. Figure 8a,b shows the PCA results with and without using filtering on PCA input data, respectively. As shown in Figure 8c,d, if we use the erroneous PC for projection and reconstruction, many holes and much noise will appear on the reconstructed depth data even if there are no holes in the input data. In contrast to that, if we use an accurate PC for projection and reconstruction, we can remove the holes and noise appearing in the input depth data by reconstructing it as shown in Figure 8e,f.
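A possible implementation of this projection/reconstruction step, continuing the cv::PCA-based sketch above (the column-wise reshape layout is an assumption about how the ROI is vectorized), is:

// Project a new ROI depth frame onto the motion model (Equation (6)) and
// reconstruct it back into metric space (Equation (7)).
cv::Mat projectAndReconstruct(const MotionModel& model, const cv::Mat& roiDepth)
{
    cv::Mat d;                                    // column-vectorized frame d_i
    roiDepth.convertTo(d, CV_32F);
    d = d.reshape(1, roiDepth.rows * roiDepth.cols);

    cv::Mat beta = model.pca.project(d);          // beta_i = W^T (d_i - d_bar)
    cv::Mat dHat = model.pca.backProject(beta);   // d_i_hat = d_bar + W * beta_i

    // Reshape back to the ROI layout; holes and noise in the raw input are
    // suppressed because only the first three PCs are retained.
    return dHat.reshape(1, roiDepth.rows);
}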

4.4.2. Motion Measurement

We use these reconstructed depth data for respiratory motion measurements. The rectangular ROI of the reconstructed depth data is further divided into smaller regions as in Figure 9a to separately measure the motion in smaller regions. Average depth values of these smaller regions along with 2D image coordinates and intrinsic camera parameters are used to calculate the 3D (X, Y and Z) coordinates of the mid-points. Then, we use these 3D coordinates to construct a surface mesh model composed of small triangles as in Figure 9b,c, which can be used to represent the chest wall surface and its motion clearly. We define the initial frame (101st frame) as the reference frame and calculate the motion of the remaining frames using the depth difference between the current frame and the reference frame.
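As an illustration of this step, the sketch below averages the reconstructed depth over each sub-region and converts the region mid-points to 3D using the pinhole camera model; the cell size, the intrinsic parameters and the assumption that the mid-point image coordinates are already expressed in the original depth image are placeholders for the calibration and ROI mapping described in Section 4.2.

#include <opencv2/opencv.hpp>
#include <vector>

// Back-project one region mid-point (u, v) with average depth Z to 3D.
cv::Point3f midpointTo3D(float u, float v, float depthMm,
                         float fx, float fy, float cx, float cy)
{
    const float Z = depthMm;                 // depth along the optical axis
    const float X = (u - cx) * Z / fx;
    const float Y = (v - cy) * Z / fy;
    return {X, Y, Z};
}

// Average the reconstructed depth over each small region and return the 3D
// mid-points, which form the vertices of the surface mesh.
std::vector<cv::Point3f> regionMidpoints(const cv::Mat& recon32f,
                                         int cellW, int cellH,
                                         float fx, float fy, float cx, float cy)
{
    std::vector<cv::Point3f> pts;
    for (int y = 0; y + cellH <= recon32f.rows; y += cellH)
        for (int x = 0; x + cellW <= recon32f.cols; x += cellW) {
            cv::Rect cell(x, y, cellW, cellH);
            float z = static_cast<float>(cv::mean(recon32f(cell))[0]);
            pts.push_back(midpointTo3D(x + cellW / 2.0f, y + cellH / 2.0f,
                                       z, fx, fy, cx, cy));
        }
    return pts;
}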

4.5. Evaluation of the Accuracy

We propose an experimental setup as shown in Figure 10 for evaluating the accuracy of the proposed method. First, our proposed method is compared with a spirometer, which measures the air flow volume using a mouthpiece device, and then with a laser line scanner, which provides very accurate 3D reconstruction results.

4.5.1. Comparison with Spirometer

We compared the respiratory motion pattern generated using the proposed method with a spirometer, which has been used for evaluating the accuracy of RGB-D camera-based respiratory function evaluation methods [41,63,64]. During this experiment, the patient breathed through a calibrated spirometer (SpiroUSB™, CareFusion) to record the airflow volume while the depth camera captured the chest wall motion simultaneously (see Figure 10a,b). The spirometer provides the airflow volume in liters, not the respiratory motion in millimeters. Therefore, with the help of the surface mesh data, we developed a method to measure the volume difference of the current frame compared to a reference frame. We found the volume difference by calculating the sum of the volumes of the small prisms whose top and bottom surfaces are the triangles in the surface mesh of the current frame and their projections on the reference plane.
First, these prisms were further divided into three irregular tetrahedrons. Then, the volume of each tetrahedron was calculated using Equation (8), where a(a_x, a_y, a_z), b(b_x, b_y, b_z), c(c_x, c_y, c_z) and d(d_x, d_y, d_z) represent the 3D coordinates of the four vertices:

V = \frac{\left| \det(A) \right|}{6}, \qquad A = \begin{pmatrix} a_x & b_x & c_x & d_x \\ a_y & b_y & c_y & d_y \\ a_z & b_z & c_z & d_z \\ 1 & 1 & 1 & 1 \end{pmatrix}.    (8)
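A direct C++ translation of Equation (8) using OpenCV's determinant routine is shown below (a minimal sketch; the splitting of each prism into three tetrahedrons is omitted).

#include <cmath>
#include <opencv2/opencv.hpp>

// Volume of one irregular tetrahedron from its four vertices a, b, c, d:
// V = |det(A)| / 6, with the homogeneous vertex coordinates as columns of A.
double tetrahedronVolume(const cv::Point3d& a, const cv::Point3d& b,
                         const cv::Point3d& c, const cv::Point3d& d)
{
    double A[16] = { a.x, b.x, c.x, d.x,
                     a.y, b.y, c.y, d.y,
                     a.z, b.z, c.z, d.z,
                     1.0, 1.0, 1.0, 1.0 };
    return std::abs(cv::determinant(cv::Mat(4, 4, CV_64F, A))) / 6.0;
}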

4.5.2. Comparison with Laser Line Scanning

Laser line scanning, which is well known for providing high accuracy (<0.1 mm) [65], is a 3D reconstruction method consisting of a laser line projector and a camera. We used this method to reconstruct a specific position of the chest wall accurately and to compare it with the PCA reconstruction results. The setup for this experiment consists of a laser line projector and the RGB-D camera as shown in Figure 10c. We projected the laser line onto the abdominal area of the target chest wall and captured the illuminated scene using the visual (RGB) camera of the RGB-D sensor. We prepared 15 datasets (D01, D02, ..., D15) from ten healthy volunteers ranging in age from 24 to 32 who participated in the data capturing process. Volunteer information is given in Table 1.
First, we calibrated the laser line projector and the RGB camera to find the 3D plane equation of the laser line with respect to the camera coordinate system using a checkerboard pattern [65,66]. Then, we separated the measuring area from the rest of the image by defining a rectangular ROI on the RGB images, in the same way as on the depth images. We took the red channel of the RGB image, applied Gaussian smoothing and fit a parabola to each column of the ROI image according to the pixel intensities. Then, by finding the maximum of the parabola, which corresponds to the laser line location, we can identify its 2D image coordinates with sub-pixel accuracy. We projected these image coordinates onto the 3D laser plane using the intrinsic camera parameters and calculated the 3D coordinates by finding the ray-plane intersection points. These 3D coordinates are referred to as the laser reconstruction in the remainder of this paper. Next, we projected the 2D coordinates of the laser line onto the reconstructed depth image d̂_i to identify the 3D coordinates of the laser line according to the proposed PCA-based method, referred to as the PCA reconstruction.
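A condensed sketch of this laser reconstruction step is given below; the three-point parabolic peak refinement stands in for the per-column parabola fit, and the plane parameters (n, d with n · X + d = 0), intrinsics and smoothing kernel are assumptions supplied by the calibration, not values from this work.

#include <opencv2/opencv.hpp>
#include <vector>

// Sub-pixel laser line localization in the red channel followed by ray-plane
// triangulation in the camera coordinate system.
std::vector<cv::Point3d> reconstructLaserLine(const cv::Mat& bgr, const cv::Rect& roi,
                                              const cv::Vec3d& n, double d,
                                              double fx, double fy, double cx, double cy)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr(roi), ch);                         // ch[2] = red channel (8-bit)
    cv::Mat red;
    cv::GaussianBlur(ch[2], red, cv::Size(5, 5), 1.5);

    std::vector<cv::Point3d> line3d;
    for (int x = 0; x < red.cols; ++x) {
        // Row with maximum intensity in this column.
        int yMax = 0;
        for (int y = 1; y < red.rows; ++y)
            if (red.at<uchar>(y, x) > red.at<uchar>(yMax, x)) yMax = y;
        if (yMax == 0 || yMax == red.rows - 1) continue;

        // Parabolic refinement around the peak gives the sub-pixel row offset.
        double i0 = red.at<uchar>(yMax - 1, x);
        double i1 = red.at<uchar>(yMax, x);
        double i2 = red.at<uchar>(yMax + 1, x);
        double denom = i0 - 2.0 * i1 + i2;
        double dy = (std::abs(denom) > 1e-9) ? 0.5 * (i0 - i2) / denom : 0.0;

        // Back-project the image point as a ray and intersect it with the laser plane.
        double u = roi.x + x, v = roi.y + yMax + dy;
        cv::Vec3d dir((u - cx) / fx, (v - cy) / fy, 1.0);
        double t = -d / n.dot(dir);                  // n . (t * dir) + d = 0
        line3d.emplace_back(t * dir[0], t * dir[1], t * dir[2]);
    }
    return line3d;
}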
The purpose of the proposed method is not to reconstruct the chest wall surface, but to measure the chest wall motion accurately. Therefore, instead of comparing the direct 3D reconstruction results, we compared the respiratory motion, defined as the depth difference between the current frame and the reference frame. We chose the 101st frame as the reference frame, as it is the starting frame of real-time respiratory motion measurement. To have a quantitative comparison, we selected five points (P_1, P_2, ..., P_5) across the laser line and found the motion error of each point separately for 100 frames. By taking the laser line reconstruction as the ground truth, we calculated the motion error E_{ij} of the j-th point on the laser line of the i-th frame (1 ≤ j ≤ 5 and 1 ≤ i ≤ 100) using:

E_{ij} = \left( D^{L}_{ij} - D^{L}_{rj} \right) - \left( D^{P}_{ij} - D^{P}_{rj} \right),    (9)

where D_{ij} is the depth value of the j-th point on the laser line of the i-th frame, the superscripts L and P represent the laser reconstruction and the PCA reconstruction, respectively, and r denotes the reference frame.

5. Results

First, we present the accuracy evaluation results of the proposed respiratory motion measurement method compared to the spirometer and laser line scanner. With the use of the spirometer, we examined the respiratory pattern using volume changes. The laser line scanner was used to analyze the motion measurement accuracy of the proposed method. Later, we compared our method with bilateral filtering and then conducted an isovolume maneuver to show the advantages of the proposed method over existing ones. Finally, we analyzed how the proposed method works under conditions of longer and irregular breathing. All of these experiments were performed in a general laboratory environment, and the software components were implemented in C++ with the help of the OpenCV and OpenNI libraries.

5.1. Comparison of Respiratory Pattern with Spirometer

Figure 11 depicts the volume comparison graphs of the spirometer and the proposed PCA-based method. The sample rate of the spirometer is lower than that of the RGB-D camera. Therefore, we applied b-spline interpolation to the available spirometer data to generate a smooth motion curve with a frame interval similar to that of the RGB-D camera.
The magnitude of the respiratory volume is different between the spirometer and the proposed method, as the measuring area and methodology are different. Therefore, we compared the data by normalizing it to a −1:1 range. As shown in Figure 11, the proposed method could generate respiratory motion patterns very similar to the spirometer with a 0.97 average correlation.

5.2. Accuracy Analysis Using Laser Line Scanning

Table 2 gives the motion error results of the five points on the laser line, calculated from the 15 datasets. We summarized the data in the table as the average, maximum and standard deviation of the motion error E_{ij} over 100 frames. The average motion error of all datasets over all five points is 0.53 ± 0.05 mm. As a qualitative comparison, motion graphs of four datasets calculated on four different points of the laser line are depicted in Figure 12. As a further analysis, we calculated the normalized cross-correlation (NCC) between the PCA motion (D^P_{ix} − D^P_{rx}) and the laser line motion (D^L_{ix} − D^L_{rx}) for each x coordinate of the laser line over 100 frames. The graph in Figure 13 shows the NCC results, which were calculated separately for each x coordinate of the laser line for all 15 datasets. The results indicate a very high correlation between the two motion estimation methods, as the average NCC over all of the datasets is 0.98 ± 0.0009.
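For reference, the zero-normalized cross-correlation used for this comparison can be computed over two motion sequences as follows (a minimal sketch; the two signals are assumed to be the per-frame depth differences described above).

#include <cmath>
#include <numeric>
#include <vector>

// Zero-normalized cross-correlation between two equally long 1D motion
// signals (e.g., PCA motion and laser line motion at one x coordinate).
double normalizedCrossCorrelation(const std::vector<double>& a,
                                  const std::vector<double>& b)
{
    const size_t n = a.size();
    const double meanA = std::accumulate(a.begin(), a.end(), 0.0) / n;
    const double meanB = std::accumulate(b.begin(), b.end(), 0.0) / n;

    double num = 0.0, varA = 0.0, varB = 0.0;
    for (size_t i = 0; i < n; ++i) {
        const double da = a[i] - meanA, db = b[i] - meanB;
        num  += da * db;
        varA += da * da;
        varB += db * db;
    }
    return num / std::sqrt(varA * varB);   // 1.0 means identical motion shape
}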

5.3. Comparison with Bilateral Filtering

To show its advantages, we compared our proposed method with bilateral filtering. In our method, hole-filling and bilateral filtering are applied only to the first 100 frames that we use as the input for PCA, and they are not used during real-time respiratory motion measurement. During this experiment, we measured the respiratory motion by applying bilateral filtering and hole-filling to all frames without using PCA, and the results were compared with the proposed PCA-based method. Figure 14a shows a part of the motion comparison graph, where bilateral filtering gives a rough curve with more temporal noise, while the proposed method gives a smoother curve with less temporal noise. The reason is that PCA provides both spatial and temporal filtering, unlike bilateral filtering, which provides only spatial filtering.
Furthermore, Figure 14b compares the proposed method and bilateral filtering with laser line scanning, a very accurate 3D reconstruction method (details are given in Section 4.5.2). Considering the laser reconstruction as the ground truth, we calculated the motion error (Equation (9)) of the proposed method and of bilateral filtering at a selected location of the chest wall. In the case of the motion comparison provided in Figure 14b, the average error is 0.35 ± 0.06 mm for the proposed method and 0.85 ± 0.08 mm for bilateral filtering.

5.4. Isovolume Maneuver

We conducted an isovolume maneuver to emphasize the regional respiratory motion measurement capability of the proposed method. During the test, the subjects were advised to hold their breath without airflow while exchanging the internal volume between the thorax and the abdomen. Then, we measured the motion of the whole chest wall (which is covered by the four dot markers) and the regional motion of the thorax and the abdomen separately, as presented in Figure 15. We used a few additional markers to separate the thorax and abdomen areas on the chest wall. Theoretically, there should be no volume change for the whole chest wall, but as we measure the depth difference in an ROI defined by the markers, which does not cover the entire chest wall area exactly, a motion pattern appears for the whole chest wall. However, the thorax and abdomen motions show opposite phases with a 0.99 cross-correlation, reflecting the volume exchange between them, which cannot be determined using a respiratory volume-measuring device such as a spirometer.

5.5. Handling Irregular Breathing

We analyze how the motion model generated using the first 100 frames affects the accuracy during longer and irregular breathing. For regular respiration that does not have much variation in respiratory rate and volume, only the first principal component is enough to accurately measure the motion. Figure 16 shows two graphs of regular respiratory motion that were calculated over 350 frames compared with the laser line scanning (details are given in Section 4.5.2). Even though we use only the first principal component calculated over 100 depth frames, the average error is about 0.3 mm and 0.8 mm for the two graphs, respectively.
However, during irregular breathing (where the respiratory rate and amplitude change from time to time), the accuracy decreases when only the first principal component is used as the motion model. As shown in Figure 17, the large difference compared to the laser line scanning shows that the first principal component alone is not enough for handling irregular respiratory motions. Therefore, we redid the accuracy analysis including the first three principal components of the motion model and drew the results on the same graph. Using the first three principal components, we could achieve sub-millimeter accuracy (∼0.5 mm) even though the respiratory pattern of the first 100 frames is entirely different from the rest of the data.
As a further refinement step for a very long treatment duration, we can update the motion model by recalculating the principal components with a new set of depth data at regular intervals.

6. Discussion and Conclusions

We have proposed a patient-specific external respiratory motion analysis technique based on PCA. A commercial RGB-D camera was used to acquire the depth data of the target respiratory motion, and PCA was applied to find a motion model corresponding to the respiration. Four dot markers attached to the chest wall were used to define an anatomically consistent measuring region throughout the measuring period. Through experimental analysis, we found that the first three principal components are sufficient to represent the respiratory motion, while the rest of the principal components represent patterns of small perturbations. Therefore, all of the depth data were projected onto the first three principal components and reconstructed, removing the spatial and temporal noise existing in the input data.
For the convenience of the volunteers who participated in the laboratory-level experiments, we allowed them to wear a black t-shirt and attached white dot markers to it. Even though we used a tight-fitting t-shirt, a few wrinkles can appear within the chest wall area and affect the accuracy of the results. Therefore, we recommend not using any clothing that covers the measuring region during the clinical treatment process. Dot markers with a clear color difference from the patient’s skin can be selected and attached directly to the patient’s body. Furthermore, it is advisable to attach the dot markers at four locations of the chest wall where there is no significant motion due to respiration, such as near the ends of the collar bones and the hip bones.
During respiratory motion modeling using PCA, we used the first 100 depth frames as the input data. The criterion for selecting this number is that the input depth data should cover a few complete respiratory cycles, and all of our experiment datasets satisfy this criterion within 100 frames. The frame rate during the experiments was about 6.7 fps on average because writing/reading data to the hard disk frame by frame takes time. However, during real respiratory motion measurement sessions, reading and/or writing data to a hard disk is not necessary; thus, we can achieve a frame rate of around 20 fps. The frame rate was very stable during the experiments, with only a 0.4 fps standard deviation.
The accuracy of the proposed method was first evaluated using a spirometer, which has an accuracy level of 3%. Even though the magnitude of the measured volume was different, the spirometer and the proposed method were highly correlated in motion pattern (0.97 average correlation). Second, a laser line scanning technique, which is well known for high accuracy, was used to analyze the motion measurement accuracy of the proposed method. A laser line that was projected onto the abdominal area of the subject was reconstructed using the laser line scanning technique and compared with the proposed PCA reconstruction method. The motion of the projected laser line was measured using both reconstruction results with respect to a reference frame. We achieved a high correlation (0.98 NCC) between the laser line scanner and the proposed method. Considering the laser scanning results as the ground truth, the measured average motion error of the proposed method is 0.53 mm, which is comparable to commercial respiratory tracking systems according to Table 3.
The proposed method provides not only a high accuracy, but also a very simple system setup, which is very flexible and portable. With the advantage of non-contact measurement, the proposed method has no interference with the patient’s respiration and, hence, provides more accurate measurements. Furthermore, the proposed method has the advantage of measuring the motion in a particular location of the chest wall, instead of measuring the motion of the whole chest wall at once.
Finding a motion model that can be used to correlate the external respiratory motion with the internal tumor motion has been discussed in the literature [14,15,16]. Linear, polynomial, b-spline and PCA-based models are a few techniques that have been investigated so far. As future work, we are also planning to find a correlation model that can be employed to estimate internal tumor motion, using the external surface motion as the surrogate input data. Furthermore, we plan to test the proposed system in a real clinical environment with patients of different demographic and clinical properties.

Supplementary Materials

The Supplementary Materials are available online at https://www.mdpi.com/1424-8220/17/8/1840/s1.

Acknowledgments

This work was supported partly by 'The Cross-Ministry Giga KOREA Project' grant funded by the Korea Government (MSIT) (No. GK17P0300, Real-time 4D reconstruction of dynamic objects for ultra-realistic service), and partly by the Convergence R&D Development Project of the Small and Medium Administration, Republic of Korea (S2392741).

Author Contributions

U.W. developed the software components, conducted the experiments, analyzed the results and drafted the manuscript. S.P. supervised the study and critically revised and finalized the intellectual content of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Keall, P.J.; Mageras, G.S.; Balter, J.M.; Emery, R.S.; Forster, K.M.; Jiang, S.B.; Kapatoes, J.M.; Low, D.A.; Murphy, M.J.; Murray, B.R.; et al. The management of respiratory motion in radiation oncology report of AAPM Task Group 76. Med. Phys. 2006, 33, 3874–3900. [Google Scholar] [CrossRef] [PubMed]
  2. Ozhasoglu, C.; Murphy, M.J. Issues in respiratory motion compensation during external-beam radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2002, 52, 1389–1399. [Google Scholar] [CrossRef]
  3. Hanley, J.; Debois, M.M.; Mah, D.; Mageras, G.S.; Raben, A.; Rosenzweig, K.; Mychalczak, B.; Schwartz, L.H.; Gloeggler, P.J.; Lutz, W.; et al. Deep inspiration breath-hold technique for lung tumors: The potential value of target immobilization and reduced lung density in dose escalation. Int. J. Radiat. Oncol. 1999, 45, 603–611. [Google Scholar] [CrossRef]
  4. Barnes, E.A.; Murray, B.R.; Robinson, D.M.; Underwood, L.J.; Hanson, J.; Roa, W.H.Y. Dosimetric evaluation of lung tumor immobilization using breath hold at deep inspiration. Int. J. Radiat. Oncol. Biol. Phys. 2001, 50, 1091–1098. [Google Scholar] [CrossRef]
  5. Davies, S.C.; Hill, A.L.; Holmes, R.B.; Halliwell, M.; Jackson, P.C. Ultrasound quantitation of respiratory organ motion in the upper abdomen. Br. J. Radiol. 1994, 67, 1096–1102. [Google Scholar] [CrossRef] [PubMed]
  6. Ross, C.S.; Hussey, D.H.; Pennington, E.C.; Stanford, W.; Fred Doornbos, J. Analysis of movement of intrathoracic neoplasms using ultrafast computerized tomography. Int. J. Radiat. Oncol. Biol. Phys. 1990, 18, 671–677. [Google Scholar] [CrossRef]
  7. Langen, K.M.; Jones, D.T.L. Organ motion and its management. Int. J. Radiat. Oncol. Biol. Phys. 2001, 50, 265–278. [Google Scholar] [CrossRef]
  8. Engelsman, M.; Damen, E.M.F.; De Jaeger, K.; Van Ingen, K.M.; Mijnheer, B.J. The effect of breathing and set-up errors on the cumulative dose to a lung tumor. Radiother. Oncol. 2001, 60, 95–105. [Google Scholar] [CrossRef]
  9. Malone, S.; Crook, J.M.; Kendal, W.S.; Zanto, J.S. Respiratory-induced prostate motion: Quantification and characterization. Int. J. Radiat. Oncol. Biol. Phys. 2000, 48, 105–109. [Google Scholar] [CrossRef]
  10. Lujan, A.E.; Larsen, E.W.; Balter, J.M.; Ten Haken, R.K. A method for incorporating organ motion due to breathing into 3D dose calculations. Med. Phys. 1999, 26, 715–720. [Google Scholar] [CrossRef] [PubMed]
  11. Jacobs, I.; Vanregemorter, J.; Scalliet, P. Influence of respiration on calculation and delivery of the prescribed dose in external radiotherapy. Radiother. Oncol. 1996, 39, 123–128. [Google Scholar] [CrossRef]
  12. Wijenayake, U.; Park, S.Y. PCA based analysis of external respiratory motion using an RGB-D camera. In Proceedings of the IEEE International Symposium on Medical Measurements and Applications (MeMeA), Benevento, Italy, 15–18 May 2016; pp. 1–6. [Google Scholar]
  13. Bukovsky, I.; Homma, N.; Ichiji, K.; Cejnek, M.; Slama, M.; Benes, P.M.; Bila, J. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications. BioMed Res. Int. 2015, 2015, 489679. [Google Scholar] [CrossRef] [PubMed]
  14. McClelland, J.; Hawkes, D.; Schaeffter, T.; King, A. Respiratory motion models: A review. Med. Image Anal. 2013, 17, 19–42. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. McClelland, J. Estimating Internal Respiratory Motion from Respiratory Surrogate Signals Using Correspondence Models. In 4D Modeling and Estimation of Respiratory Motion for Radiation Therapy; Ehrhardt, J., Lorenz, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–213. [Google Scholar]
  16. Fayad, H.; Pan, T.; Clément, J.F.; Visvikis, D. Technical note: Correlation of respiratory motion between external patient surface and internal anatomical landmarks. Med. Phys. 2011, 38, 3157–3164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Seppenwoolde, Y.; Berbeco, R.I.; Nishioka, S.; Shirato, H.; Heijmen, B. Accuracy of tumor motion compensation algorithm from a robotic respiratory tracking system: A simulation study. Med. Phys. 2007, 34, 2774–2784. [Google Scholar] [CrossRef] [PubMed]
  18. Willoughby, T.R.; Kupelian, P.A.; Pouliot, J.; Shinohara, K.; Aubin, M.; Roach, M.; Skrumeda, L.L.; Balter, J.M.; Litzenberg, D.W.; Hadley, S.W.; et al. Target localization and real-time tracking using the Calypso 4D localization system in patients with localized prostate cancer. Int. J. Radiat. Oncol. Biol. Phys. 2006, 65, 528–534. [Google Scholar] [CrossRef] [PubMed]
  19. Jin, J.Y.; Yin, F.F.; Tenn, S.E.; Medin, P.M.; Solberg, T.D. Use of the BrainLAB ExacTrac X-Ray 6D System in Image-Guided Radiotherapy. Med. Dosim. 2008, 33, 124–134. [Google Scholar] [CrossRef] [PubMed]
  20. Fu, D.; Kahn, R.; Wang, B.; Wang, H.; Mu, Z.; Park, J.; Kuduvalli, G.; Maurer, C.R., Jr. Xsight lung tracking system: A fiducial-less method for respiratory motion tracking. In Treating Tumors that Move with Respiration; Springer: Berlin/Heidelberg, Germany, 2007; pp. 265–282. [Google Scholar]
  21. Zhang, Y.; Yang, J.; Zhang, L.; Court, L.E.; Balter, P.A.; Dong, L. Modeling respiratory motion for reducing motion artifacts in 4D CT images. Med. Phys. 2013, 40, 041716. [Google Scholar] [CrossRef] [PubMed]
  22. Mori, S.; Hara, R.; Yanagi, T.; Sharp, G.C.; Kumagai, M.; Asakura, H.; Kishimoto, R.; Yamada, S.; Kandatsu, S.; Kamada, T. Four-dimensional measurement of intrafractional respiratory motion of pancreatic tumors using a 256 multi-slice CT scanner. Radiother. Oncol. 2009, 92, 231–237. [Google Scholar] [CrossRef] [PubMed]
  23. Yang, D.; Lu, W.; Low, D.A.; Deasy, J.O.; Hope, A.J.; El Naqa, I. 4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling. Med. Phys. 2008, 35, 4577–4590. [Google Scholar] [CrossRef] [PubMed]
  24. Yun, J.; Yip, E.; Wachowicz, K.; Rathee, S.; Mackenzie, M.; Robinson, D.; Fallone, B.G. Evaluation of a lung tumor autocontouring algorithm for intrafractional tumor tracking using low-field MRI: A phantom study. Med. Phys. 2012, 39, 1481–1494. [Google Scholar] [CrossRef] [PubMed]
  25. Crijns, S.P.M.; Raaymakers, B.W.; Lagendijk, J.J.W. Proof of concept of MRI-guided tracked radiation delivery: Tracking one-dimensional motion. Phys. Med. Biol. 2012, 57, 7863. [Google Scholar] [CrossRef] [PubMed]
  26. Cerviño, L.I.; Du, J.; Jiang, S.B. MRI-guided tumor tracking in lung cancer radiotherapy. Phys. Med. Biol. 2011, 56, 3773. [Google Scholar] [CrossRef] [PubMed]
  27. Cai, J.; Chang, Z.; Wang, Z.; Paul Segars, W.; Yin, F.F. Four-dimensional magnetic resonance imaging (4D-MRI) using image-based respiratory surrogate: A feasibility study. Med. Phys. 2011, 38, 6384–6394. [Google Scholar] [CrossRef] [PubMed]
  28. Siebenthal, M.V.; Székely, G.; Gamper, U.; Boesiger, P.; Lomax, A.; Cattin, P. 4D MR imaging of respiratory organ motion and its variability. Phys. Med. Biol. 2007, 52, 1547. [Google Scholar] [CrossRef] [PubMed]
  29. Hwang, Y.; Kim, J.B.; Kim, Y.S.; Bang, W.C.; Kim, J.D.K.; Kim, C. Ultrasound image-based respiratory motion tracking. SPIE Med. Imaging 2012, 83200N. [Google Scholar] [CrossRef]
  30. Nadeau, C.; Krupa, A.; Gangloff, J. Automatic Tracking of an Organ Section with an Ultrasound Probe: Compensation of Respiratory Motion. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 57–64. [Google Scholar]
  31. Nutti, B.; Kronander, A.; Nilsing, M.; Maad, K.; Svensson, C.; Li, H. Depth Sensor-Based Realtime Tumor Tracking for Accurate Radiation Therapy; Eurographics 2014—Short Papers; Galin, E., Wand, M., Eds.; The Eurographics Association: Strasbourg, France, 2014; pp. 10–13. [Google Scholar]
  32. Tahavori, F.; Alnowami, M.; Wells, K. Marker-less respiratory motion modeling using the Microsoft Kinect for Windows. In Proceedings of Medical Imaging 2014: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, CA, USA, 15–20 February 2014. [Google Scholar]
  33. Ferrigno, G.; Carnevali, P.; Aliverti, A.; Molteni, F.; Beulcke, G.; Pedotti, A. Three-dimensional optical analysis of chest wall motion. J. Appl. Physiol. 1994, 77, 1224–1231. [Google Scholar] [PubMed]
  34. Wijenayake, U.; Park, S.Y. Respiratory motion estimation using visual coded markers for radiotherapy. In Proceedings of the 29th Annual ACM Symposium on Applied Computing Association for Computing Machinery (ACM), Gyeongju, Korea, 24–28 March 2014; pp. 1751–1752. [Google Scholar]
  35. Yan, H.; Zhu, G.; Yang, J.; Lu, M.; Ajlouni, M.; Kim, J.H.; Yin, F.F. The Investigation on the Location Effect of External Markers in Respiratory Gated Radiotherapy. J. Appl. Clin. Med. Phys. 2008, 9, 2758. [Google Scholar] [CrossRef] [PubMed]
  36. Alnowami, M.R.; Lewis, E.; Wells, K.; Guy, M. Respiratory motion modelling and prediction using probability density estimation. In Proceedings of the IEEE Nuclear Science Symposuim and Medical Imaging Conference, Knoxville, TN, USA, 30 October–6 November 2010; pp. 2465–2469. [Google Scholar]
  37. Alnowami, M.; Lewis, E.; Wells, K.; Guy, M. Inter- and intra-subject variation of abdominal vs. thoracic respiratory motion using kernel density estimation. In Proceedings of the IEEE Nuclear Science Symposuim and Medical Imaging Conference, Knoxville, TN, USA, 30 October–6 November 2010; pp. 2921–2924. [Google Scholar]
  38. Babchenko, A.; Khanokh, B.; Shomer, Y.; Nitzan, M. Fiber Optic Sensor for the Measurement of Respiratory Chest Circumference Changes. J. Biomed. Opt. 1999, 4, 224–229. [Google Scholar] [CrossRef] [PubMed]
  39. Allsop, T.; Bhamber, R.; Lloyd, G.; Miller, M.R.; Dixon, A.; Webb, D.; Castañón, J.D.A.; Bennion, I. Respiratory function monitoring using a real-time three-dimensional fiber-optic shaping sensing scheme based upon fiber Bragg gratings. J. Biomed. Opt. 2012, 17, 117001. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Aoki, H.; Koshiji, K.; Nakamura, H.; Takemura, Y.; Nakajima, M. Study on respiration monitoring method using near-infrared multiple slit-lights projection. In Proceedings of the IEEE International Symposium on Micro-NanoMechatronics and Human Science, Nagoya, Japan, 7–9 November 2005; pp. 273–278. [Google Scholar]
  41. Chen, H.; Cheng, Y.; Liu, D.; Zhang, X.; Zhang, J.; Que, C.; Wang, G.; Fang, J. Color structured light system of chest wall motion measurement for respiratory volume evaluation. J. Biomed. Opt. 2010, 15, 026013. [Google Scholar] [CrossRef] [PubMed]
  42. Müller, K.; Schaller, C.; Penne, J.; Hornegger, J. Surface-Based Respiratory Motion Classification and Verification. In Bildverarbeitung für die Medizin 2009; Meinzer, H.P., Deserno, T.M., Handels, H., Tolxdorff, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 257–261. [Google Scholar]
  43. Schaller, C.; Penne, J.; Hornegger, J. Time-of-flight sensor for respiratory motion gating. Med. Phys. 2008, 35, 3090–3093. [Google Scholar] [CrossRef] [PubMed]
  44. Placht, S.; Stancanello, J.; Schaller, C.; Balda, M.; Angelopoulou, E. Fast time-of-flight camera based surface registration for radiotherapy patient positioning. Med. Phys. 2012, 39, 4–17. [Google Scholar] [CrossRef] [PubMed]
  45. Burba, N.; Bolas, M.; Krum, D.M.; Suma, E.A. Unobtrusive measurement of subtle nonverbal behaviors with the Microsoft Kinect. In Proceedings of the 2012 IEEE Virtual Reality Workshops (VRW), Costa Mesa, CA, USA, 4–8 March 2012; pp. 1–4. [Google Scholar]
  46. Martinez, M.; Stiefelhagen, R. Breath rate monitoring during sleep using near-IR imagery and PCA. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 3472–3475. [Google Scholar]
  47. Yu, M.C.; Liou, J.L.; Kuo, S.W.; Lee, M.S.; Hung, Y.P. Noncontact respiratory measurement of volume change using depth camera. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 2371–2374. [Google Scholar]
  48. Benetazzo, F.; Longhi, S.; Monteriù, A.; Freddi, A. Respiratory rate detection algorithm based on RGB-D camera: Theoretical background and experimental results. Healthc. Technol. Lett. 2014, 1, 81–86. [Google Scholar] [CrossRef] [PubMed]
  49. Bernal, E.A.; Mestha, L.K.; Shilla, E. Non contact monitoring of respiratory function via depth sensing. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Valencia, Spain, 1–4 June 2014; pp. 101–104. [Google Scholar]
  50. Al-Naji, A.; Gibson, K.; Lee, S.H.; Chahl, J. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study. Sensors 2017, 17, 286. [Google Scholar] [CrossRef] [PubMed]
  51. Procházka, A.; Schätz, M.; Vyšata, O.; Vališ, M. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis. Sensors 2016, 16, 996. [Google Scholar] [CrossRef] [PubMed]
  52. Seppenwoolde, Y.; Shirato, H.; Kitamura, K.; Shimizu, S.; van Herk, M.; Lebesque, J.V.; Miyasaka, K. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2002, 53, 822–834. [Google Scholar] [CrossRef]
  53. Xia, J.; Siochi, R.A. A real-time respiratory motion monitoring system using KINECT: Proof of concept. Med. Phys. 2012, 39, 2682–2685. [Google Scholar] [CrossRef] [PubMed]
  54. Wasza, J.; Bauer, S.; Haase, S.; Hornegger, J. Sparse Principal Axes Statistical Surface Deformation Models for Respiration Analysis and Classification. In Bildverarbeitung für die Medizin 2012; Tolxdorff, T., Deserno, M.T., Handels, H., Meinzer, H.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 316–321. [Google Scholar]
55. Alnowami, M.; Alnwaimi, B.; Tahavori, F.; Copland, M.; Wells, K. A quantitative assessment of using the Kinect for Xbox360 for respiratory surface motion tracking. In Proceedings of SPIE Medical Imaging, San Diego, CA, USA, 4 February 2012; p. 83161T. [Google Scholar]
56. Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, P.; Wells, K. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology. In Proceedings of SPIE Medical Imaging, Orlando, FL, USA, 21 February 2015; p. 94152I. [Google Scholar] [CrossRef]
  57. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson: New York, NY, USA, 2007. [Google Scholar]
  58. Gui, P.; Ye, Q.; Chen, H.; Zhang, T.; Yang, C. Accurately calibrate kinect sensor using indoor control field. In Proceedings of the 2014 Third International Workshop on Earth Observation and Remote Sensing Applications (EORSA), Changsha, China, 11–14 June 2014; pp. 9–13. [Google Scholar]
59. Herrera C., D.; Kannala, J.; Heikkilä, J. Joint Depth and Color Camera Calibration with Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2058–2064. [Google Scholar]
  60. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  61. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), Bombay, India, 4–7 January 1998; pp. 839–846. [Google Scholar]
  62. Jolliffe, I. Principal Component Analysis; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  63. Harte, J.M.; Golby, C.K.; Acosta, J.; Nash, E.F.; Kiraci, E.; Williams, M.A.; Arvanitis, T.N.; Naidu, B. Chest wall motion analysis in healthy volunteers and adults with cystic fibrosis using a novel Kinect-based motion tracking system. Med. Biol. Eng. Comput. 2016, 54, 1631–1640. [Google Scholar] [CrossRef] [PubMed]
  64. Sharp, C.; Soleimani, V.; Hannuna, S.; Camplani, M.; Damen, D.; Viner, J.; Mirmehdi, M.; Dodd, J.W. Toward Respiratory Assessment Using Depth Measurements from a Time-of-Flight Sensor. Front. Physiol. 2017, 8. [Google Scholar] [CrossRef] [PubMed]
65. Zhou, F.; Zhang, G. Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations. Image Vis. Comput. 2005, 23, 59–67. [Google Scholar] [CrossRef]
  66. Dang, Q.; Chee, Y.; Pham, D.; Suh, Y. A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit. Sensors 2016, 16, 95. [Google Scholar] [CrossRef] [PubMed]
  67. Matney, J.E.; Parker, B.C.; Neck, D.W.; Henkelmann, G.; Rosen, I.I. Target localization accuracy in a respiratory phantom using BrainLAB ExacTrac and 4DCT imaging. J. Appl. Clin. Med. Phys. 2011, 12, 3296. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the proposed PCA-based respiratory motion-analyzing system. The first 100 depth frames are used to generate a PCA-based respiratory motion model. Then, that model (principal components) is used for real-time respiratory motion measurement starting from the 101st frame.
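To make the two stages of Figure 1 concrete, the following Python/NumPy sketch fits a PCA motion model to a batch of vectorized ROI depth frames and then projects each subsequent frame onto the retained components. The function names and the use of NumPy's SVD are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_pca_model(training_frames, n_components=1):
    """Fit a PCA respiratory motion model.
    training_frames: (N, H*W) array of vectorized ROI depth frames
    (the paper uses the first 100 frames)."""
    mean = training_frames.mean(axis=0)
    centered = training_frames - mean
    # Right singular vectors of the centered data are the principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def measure_motion(frame, mean, components):
    """Project a new vectorized depth frame onto the model; the resulting
    PC scores trace the external respiratory motion signal."""
    return components @ (frame - mean)
```

In this sketch, the first 100 frames would be stacked row-wise to form the training matrix, and measure_motion would then be called once per incoming frame from the 101st frame onward.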
Figure 2. Experimental setup where the patient is lying down in the supine position wearing a skin-tight t-shirt with four white dot markers. The RGB-D camera is placed approximately 85 cm above the patient.
Figure 3. The process of rectangular ROI generation. (a) Captured visual image; (b) after binarization using Otsu’s method; (c) defining the measuring area after finding the center coordinates of the four markers; (d) identified measuring area projected onto the aligned depth image; (e) generated rectangular ROI using perspective transformation.
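The ROI extraction in Figure 3 can be sketched with OpenCV as below: Otsu binarization of the visual image, centroid detection of the four markers, and a perspective warp of the aligned depth image into a rectangular ROI. The marker-selection heuristic (four largest blobs), the corner ordering, and the output size are assumptions made for illustration only.

```python
import cv2
import numpy as np

def order_corners(pts):
    """Order four marker centers as top-left, top-right, bottom-right, bottom-left."""
    pts = np.float32(pts)
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def rectangular_roi(visual_bgr, aligned_depth, out_size=(200, 150)):
    """Detect the four white dot markers and warp the enclosed region of the
    aligned depth image into a rectangular ROI (cf. Figure 3)."""
    gray = cv2.cvtColor(visual_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Assume the four largest bright blobs are the markers; take their centroids
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:4]
    centers = []
    for c in contours:
        m = cv2.moments(c)
        centers.append((m['m10'] / m['m00'], m['m01'] / m['m00']))
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(order_corners(centers), dst)
    return cv2.warpPerspective(aligned_depth, H, out_size)
```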
Figure 4. (a) Two example depth frames where holes appear in the chest wall region; (b) erroneous PCA result (eigenvector) where large data variations appear near the hole regions; (c) PCA result after applying hole-filling and bilateral filtering to input depth data.
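A minimal sketch of the depth preprocessing suggested by Figure 4 is given below: zero-valued holes are filled from valid neighbours, and the result is smoothed with a bilateral filter. The neighbourhood-averaging hole filler and the filter parameters are illustrative assumptions rather than the exact procedure used in the paper.

```python
import cv2
import numpy as np

def preprocess_depth(depth_mm):
    """Fill zero-valued holes and suppress spatial noise before PCA."""
    depth = depth_mm.astype(np.float32)
    valid = (depth > 0).astype(np.float32)
    kernel = np.ones((5, 5), np.float32)
    summed = cv2.filter2D(depth * valid, -1, kernel)   # sum of valid neighbours
    counts = cv2.filter2D(valid, -1, kernel)           # number of valid neighbours
    filled = depth.copy()
    holes = depth == 0
    filled[holes] = summed[holes] / np.maximum(counts[holes], 1.0)
    # Edge-preserving spatial smoothing; sigma values are illustrative
    return cv2.bilateralFilter(filled, d=5, sigmaColor=30, sigmaSpace=5)
```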
Figure 5. (a) Comparison of the first ten principal components using five sets of input data taken during regular breathing and (b) three sets of input data taken during irregular breathing. The first principal component dominates the others and represents over 98% of the data variance for regular breathing, while three principal components are needed to cover 98% of the data variance for irregular breathing.
Figure 6. Projection results of 100 depth frames onto the first three PCs. Only the first PC shows a clear respiratory motion pattern for three datasets (a,b,c) taken during regular breathing.
Figure 7. Projection results of 300 depth frames on the first three PCs for irregular breathing. The first two principal components show an apparent respiratory pattern, while the third one also shows a smaller respiratory signal. Graphs (a,b) represent two datasets.
Figure 8. (a) PCA result (first eigenvector) using bilateral filtering and hole-filling; (b) erroneous PCA result without bilateral filtering and hole-filling; (c) example input depth image without any holes; (d) reconstruction result of (c) using the incorrect PCA result shown in (b); (e) example input depth image with a few holes; (f) reconstruction result of (e) using the PCA result shown in (a).
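The denoised frames illustrated in Figure 8 are obtained by reconstructing each input frame from the retained principal components. A brief sketch, reusing the mean and components from the model-building example above:

```python
import numpy as np

def reconstruct_frame(frame, mean, components):
    """Denoise a vectorized depth frame by projecting it onto the retained
    principal components and mapping it back to depth space."""
    scores = components @ (frame - mean)   # low-dimensional representation
    return mean + components.T @ scores    # reconstructed (smoothed) frame
```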
Figure 9. The surface mesh generation process. (a) The rectangular ROI of the reconstructed depth is further divided into smaller square ROIs; (b) a surface mesh is generated by finding the 3D coordinates of the midpoints of the smaller ROIs using the average depth value of each region; (c) a selected frame of a video sequence showing the motion of the chest wall in a 3D viewer using a mesh model. Green dots represent the 3D positions of the mesh vertices over time.
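The mesh construction of Figure 9 can be sketched as follows: the ROI is tiled into small square cells, the average depth of each cell is assigned to its midpoint, and the midpoint is back-projected with the pinhole camera model. The intrinsic parameters, the cell size, and the assumption that ROI pixel coordinates coincide with image coordinates are simplifications made for illustration.

```python
import numpy as np

def mesh_vertices(depth_roi, fx, fy, cx, cy, cell=10):
    """Build 3D mesh vertices from a rectangular depth ROI: each small
    square cell contributes one vertex at its midpoint, back-projected
    with the pinhole model. fx, fy, cx, cy are depth camera intrinsics."""
    h, w = depth_roi.shape
    vertices = []
    for v in range(0, h - cell + 1, cell):
        for u in range(0, w - cell + 1, cell):
            z = depth_roi[v:v + cell, u:u + cell].mean()  # average depth of the cell
            uc, vc = u + cell / 2.0, v + cell / 2.0       # cell midpoint (pixels)
            vertices.append(((uc - cx) * z / fx, (vc - cy) * z / fy, z))
    return np.asarray(vertices)
```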
Figure 10. Experimental setup for evaluating the accuracy of the proposed method using a spirometer and a laser line scanner. (a) Volunteers are advised to lie down in the supine position and breathe only through the spirometer. The RGB-D camera and laser line projector are placed above the volunteer, and the laser line is projected onto the abdomen area. (b) CareFusion SpiroUSB™ spirometer. (c) The configuration of the RGB-D camera and laser line projector.
Figure 11. Comparison of respiratory volume measurement (normalized to the [−1, 1] range) using the proposed method (PCA) and a spirometer. Graphs (a–d) represent four selected datasets. Black dots represent the original data points of the spirometer, while the blue line represents the interpolated data.
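For comparisons such as Figure 11, both signals are rescaled to the [−1, 1] range, and their agreement can be summarized with a Pearson correlation coefficient. The sketch below assumes the two signals have already been resampled onto a common time base.

```python
import numpy as np

def normalize_signal(x):
    """Rescale a respiratory signal to the [-1, 1] range for comparison plots."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

def correlation(pca_signal, spirometer_signal):
    """Pearson correlation between two equally sampled respiratory signals."""
    return float(np.corrcoef(pca_signal, spirometer_signal)[0, 1])
```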
Figure 12. Comparison of respiratory motion measurement using the proposed method (PCA) and laser line scanning. Measurements are taken at different positions on the projected laser line. The 101st frame of each dataset is selected as the reference frame, and the motion of the remaining frames is measured with respect to it up to the 200th frame. Graphs (a–d) show the motion measurement results of four different datasets.
Figure 13. Normalized cross-correlation (NCC) between PCA and laser scanning across 100 frames. NCC is calculated for each point on the laser line along the X-axis separately.
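The per-point agreement reported in Figure 13 corresponds to a normalized cross-correlation between the PCA-based and laser-based motion signals at each laser-line position; one common zero-lag definition is sketched below.

```python
import numpy as np

def ncc(a, b):
    """Zero-lag normalized cross-correlation between two motion signals.
    Returns a value in [-1, 1]; 1 means identical signal shape."""
    a = (np.asarray(a, float) - np.mean(a)) / np.std(a)
    b = (np.asarray(b, float) - np.mean(b)) / np.std(b)
    return float(np.mean(a * b))
```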
Figure 14. Comparison of the proposed PCA-based method and bilateral filtering. (a) A part of the motion comparison graph. The proposed PCA-based method provides a smooth curve, while bilateral filtering gives a rough curve with more temporal noise. (b) Comparison of the proposed PCA-based method and bilateral filtering with laser line scanning.
Figure 15. Respiratory motion graph of a volunteer performing the isovolume maneuver. The opposite phase of the whole-thorax and whole-abdomen motion reflects the volume exchange between them.
Figure 16. Motion comparison graphs generated for a regular respiratory pattern over a longer duration (350 frames). The first 100 frames are used for PCA, and only the first principal component is used as the motion model. All frames are then used for accuracy analysis. High accuracy is achieved even though only the first PC is used for reconstruction. Graphs (a,b) show the motion comparison results of two different datasets.
Figure 17. Motion comparison graphs generated for irregular respiratory patterns over a longer duration (350 frames). The first 100 frames are used for PCA, and the first principal component and the first three principal components are used as the motion models, respectively. All frames are then used for accuracy analysis. A large difference appears between the laser scanner and the PCA method when only the first principal component is used, whereas high accuracy is achieved when the first three principal components are used as the motion model. Graphs (a,b) show the motion comparison results of two different datasets.
Table 1. Clinical and demographic information of the volunteers who participated in the experiments.
Volunteer   Gender   Age (years)   BMI (kg/m²)   Datasets
1           M        29            26.4          D01, D02
2           M        32            28.7          D03
3           M        26            27.4          D04, D05
4           M        27            21.5          D06, D07
5           M        25            26.9          D08
6           M        28            26.5          D09
7           M        27            19.3          D10, D11
8           M        24            24.3          D12, D13
9           M        30            20.9          D14
10          M        25            24.0          D15
Table 2. Motion error of the proposed PCA-based method compared to laser line scanning calculated on five locations of the laser line for 15 datasets. All data are given in mm.
Position  Parameters          D01   D02   D03   D04   D05   D06   D07   D08   D09   D10   D11   D12   D13   D14   D15   Average
P1        Average             0.23  0.66  0.18  0.27  0.39  0.36  0.36  0.21  0.83  0.45  0.32  0.36  0.94  0.43  0.55  0.44
P1        Max.                0.92  2.69  0.66  1.14  0.96  1.41  1.41  0.77  1.91  1.45  1.05  1.47  1.89  1.24  1.51  1.37
P1        Standard deviation  0.19  0.66  0.13  0.24  0.22  0.32  0.32  0.16  0.47  0.31  0.23  0.29  0.50  0.27  0.38  0.31
P2        Average             0.39  0.34  0.33  0.52  0.22  1.09  0.47  0.47  0.85  0.46  0.30  0.50  0.52  0.97  0.38  0.52
P2        Max.                1.10  1.34  0.84  1.62  0.66  1.87  1.37  1.31  1.72  1.34  0.79  1.55  1.56  2.51  1.38  1.40
P2        Standard deviation  0.25  0.31  0.21  0.38  0.16  0.40  0.30  0.33  0.46  0.32  0.19  0.34  0.40  0.66  0.29  0.33
P3        Average             0.31  0.85  0.42  0.50  0.59  0.41  0.44  0.74  0.78  0.70  0.40  0.63  1.04  0.57  0.64  0.60
P3        Max.                1.09  1.90  1.18  1.29  1.39  1.28  1.59  1.81  1.97  1.83  1.03  1.89  2.55  1.56  1.82  1.61
P3        Standard deviation  0.25  0.44  0.26  0.32  0.34  0.31  0.34  0.44  0.50  0.46  0.26  0.47  0.65  0.36  0.40  0.39
P4        Average             0.42  0.27  0.28  0.51  0.34  0.40  0.36  0.38  1.18  1.55  0.69  0.50  0.74  0.49  0.41  0.57
P4        Max.                0.95  1.38  0.77  1.72  0.91  0.90  1.03  1.24  2.45  3.18  1.52  1.52  1.86  1.11  1.02  1.44
P4        Standard deviation  0.21  0.27  0.19  0.46  0.23  0.24  0.26  0.27  0.71  0.67  0.32  0.43  0.43  0.32  0.24  0.35
P5        Average             0.32  0.43  0.33  0.29  0.51  0.89  0.53  0.70  0.37  0.43  0.38  0.70  0.87  0.63  0.73  0.54
P5        Max.                0.89  2.23  0.96  1.61  0.97  1.68  1.46  1.59  1.27  1.35  1.02  2.04  2.16  1.63  1.79  1.51
P5        Standard deviation  0.22  0.49  0.21  0.27  0.23  0.37  0.40  0.40  0.29  0.33  0.25  0.59  0.56  0.37  0.42  0.36
Table 3. Accuracy comparison of the proposed method with related respiratory motion tracking methods.
System                  Accuracy
Synchrony [17]          <1.5 mm
ExacTrac [19,67]        <1.0 mm
Calypso [18]            <1.5 mm
Yang et al. [23]        1.1 ± 0.8 mm
Chen et al. [41]        4.25 ± 3.49%
Alnowami et al. [55]    3.1 ± 0.6 mm
Proposed Method         0.53 ± 0.25 mm
