Article

Fusion of Intraoperative 3D B-mode and Contrast-Enhanced Ultrasound Data for Automatic Identification of Residual Brain Tumors

1 Telematics (CA), Engineering Division (DICIS), University of Guanajuato, Campus Irapuato-Salamanca, Carr. Salamanca-Valle km 3.5 + 1.8, Comunidad de Palo Blanco, Salamanca, Gto. 36885, Mexico
2 Department of Neurosurgery, University Hospital Leipzig, Leipzig 04103, Germany
3 Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig 04103, Germany
4 Centro de Investigacion en Matematicas (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato, Gto. 36000, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2017, 7(4), 415; https://doi.org/10.3390/app7040415
Submission received: 15 February 2017 / Revised: 11 April 2017 / Accepted: 17 April 2017 / Published: 19 April 2017

Abstract

Intraoperative ultrasound (iUS) imaging is routinely performed to assist neurosurgeons during tumor surgery. In particular, identifying the possible presence of residual tumors at the end of the intervention is crucial for the operation outcome. B-mode ultrasound remains the standard modality because it depicts brain structures well. However, tumorous tissue is hard to differentiate from resection cavity borders, blood and artifacts. Contrast-enhanced ultrasound (CEUS), on the other hand, highlights residuals of the tumor, but the images are complex to interpret. Therefore, an assistance system to support the identification of tumor remnants in the iUS data is needed. Our approach is based on image segmentation and data fusion techniques. It combines relevant information, automatically extracted from both intraoperative B-mode and CEUS image data, according to decision rules that model how neurosurgeons interpret the iUS data. The method was tested on an image dataset of 23 patients suffering from glioblastoma. The detection rate of brain areas with tumor residuals achieved by the algorithm was qualitatively and quantitatively compared with manual annotations provided by experts. The results showed that the assistance tool was able to successfully identify areas with suspicious tissue.

1. Introduction

Nowadays, brain tumor surgeries are guided by neuronavigation systems, which are commonly based on anatomical preoperative 3D MR data together with functional data. Such systems accurately assist the first steps of the operation, which consist of locating the tumor under the skull and defining the opening access. However, right after the craniotomy and the opening of the dura mater, the brain tissue shifts by up to 2 cm, so the tumor location and shape indicated in the preoperative MR data are no longer accurate. Experienced neurosurgeons rely on haptic and visual cues that distinguish the tumor from the surrounding edema and brain for orientation, preparation and definition of the tumor borders. However, some tumors have complex irregular shapes, and parts can be hidden behind anatomical structures. Intraoperative imaging is therefore crucial to provide the surgeon with an updated representation of the current tumor state during the operation. Modern intraoperative imaging modalities for neurosurgery are MR, fluorescence and ultrasound. Intraoperative MR imaging delivers image data of quality similar to pre- and post-operative datasets; however, investment and follow-up costs limit its use to a few hospitals only. Fluorescence imaging requires the patient to take a contrast agent, 5-aminolevulinic acid (5-ALA), orally. This substance accumulates in malignant tumor cells and emits red fluorescence under blue light excitation (400 nm). Margins of the tumor surface are visualized in the operating microscope view during the operation. The main limitations of this technique are: (1) the high cost of the drug and (2) the visualization of the brain and tumor surface only. Intraoperative ultrasound (iUS) imaging is therefore the most widely used imaging modality during brain tumor operations. Ultrasound devices are easy to use in the operating room and provide real-time visualization of the brain anatomical structures, so extra image acquisitions modify the surgical workflow only slightly. Additionally, they are relatively low cost in comparison to other medical imaging systems. This intraoperative modality is routinely used to guide brain tumor operations. In particular, iUS aims at identifying the presence of possible tumor residuals at the end of the operation, in order to remove as much tumor tissue as possible [1,2]. This is a crucial aspect, since several studies showed that a gross-total resection has a positive impact on the progression-free survival of patients. Figure 1 illustrates the surgeon holding a US probe at the open head surface of the patient during the intraoperative US image acquisition.
Intraoperative B-mode ultrasound (iB-mode) remains the most popular modality used to support brain tumor surgery, but it is not always suitable. Indeed, specific brain tumors (e.g., glial tumors) often appear with weak contrast, and the exact position of tumor boundaries is hard to define. Furthermore, the tumor residuals, which are located beyond the borders of the resection cavity, are hardly differentiable from blood and artifacts. The use of an ultrasound contrast agent to enhance brain tumor tissue and residual tumor is currently being developed. The technique is not new; contrast-enhanced ultrasound (CEUS) imaging is routinely performed, and it has already been tested in other medical areas like breast tumor diagnosis [3,4], liver lesions [5,6], renal masses [7,8,9] or blood vessel identification [10,11,12]. Additionally, several recent studies effectively demonstrated the enhancement of brain tumor tissue and tumor residuals by using CEUS [13,14,15,16,17].
However, the identification of tumor residuals in the iUS data generally remains complex, even for the expert eye. Depending on the position of the tumor within the patient’s head, the resection cavity, as well as other cerebral structures like blood vessels, potentially ventricles and bone structures, are usually well depicted in the iB-mode image data. However, possible tumor residuals are hardly differentiable from other hyperechogenic structures, like the border of the resection cavity, blood or artifacts (Figure 2, left). In the iCEUS image data, only blood vessels and vascularized structures, like tumors, are enhanced. Conversely, the borders of the resection cavity, which are important structures needed to analyze the images correctly, are hardly or not at all visible (Figure 2, right). The combination of the information in the iB-mode and iCEUS image data, also called data fusion, can therefore support the identification of tumor residuals.
Image fusion consists of combining relevant information from various source images of the same scene into a single resulting image called the “fused image”. The aim of fusion is to preserve specific details of the source images within the fused image to obtain a better representation and understanding of the scene. In theory, three levels of image fusion can be distinguished: the pixel level, the feature level and the decision level [18,19]. The first is known as the lowest level because it directly involves the pixels of the source images. The second level utilizes features or objects extracted from the source images. The highest level involves decision rules. This technique is widely used in applications like remote sensing [20,21], computer vision [22,23] and medical imaging [24,25]. In the medical field, image fusion is mainly applied to provide a high-quality representation of patient data by combining images from different modalities; its objectives are mainly the improvement of image contrast and the rectification of image degradation. Image fusion is performed using various fundamental methods. Das et al. [26] combined a non-subsampled contourlet transform (NSCT) with a reduced pulse-coupled neural network and a fuzzy logic technique to overcome image fusion problems such as contrast reduction and image degradation. Zhu et al. employed a dictionary learning approach [27]. Because traditional dictionary learning methods produce limited and redundant information in image patches, an alternative scheme of image patch sampling and clustering was proposed; the K-SVD algorithm was then used to train patch groups into compact sub-dictionaries, which were combined into a complete dictionary. Furthermore, a multimodal (CT/MRI) image fusion method based on NSCT was introduced by Bhatnagar et al. [28]. The resulting low- and high-frequency coefficients were combined through phase congruency and directive contrast-based models, respectively. Then, the inverse NSCT was applied to the composite coefficients to recover the fused image. Since nature-inspired techniques became popular in computer vision, they have been applied extensively in medical image fusion. Xu et al. [29] fused multimodal medical images by means of adaptive pulse-coupled neural networks (PCNN); they proposed automatic and optimal tuning of the PCNN model parameters by using the quantum-behaved particle swarm optimization algorithm. In the same fashion, the swarm intelligence of the ant colony combined with a neural network was used for fusing images from the PET, MRI and SPECT modalities [30]. Since the loss of edge and directional information often occurs when feeding neural network inputs, ant colony optimization and statistical scaling techniques were used, respectively, to detect and enhance the image edges before the neural network training and testing. Above all, image fusion has demonstrated its effectiveness for planning and intraoperative interventions, especially in neurosurgery. In this context, fusion techniques make it possible to augment the visualization of anatomical structures that are depicted in only one imaging modality or to monitor the evolution of a disease over time. For instance, CT-MR fusion images were used by Nemec et al. [31] to help the surgeon improve the surgical performance on temporal bone tumors. Furthermore, Prada et al. [32] presented fusion imaging between preoperative MRI and iUS for intraoperative ultrasound-based navigation in the context of brain tumor removal. The combination of MRI, characterized by good spatial resolution and a wide field of view, with iUS, which provides the real-time status of the brain, enables the improvement of surgical outcomes. By the same token, an interesting review concerning image fusion for precise target detection in radiosurgery, neurosurgery and hypofractionated radiotherapy was presented in [33]. It points out that mixing images such as MR and CT is useful to avoid damage to nerves and blood vessels, to accurately locate tumors and to follow up on the postoperative treatment.
In this technical paper, we are concerned with the development of an image-processing approach to aid the surgeon in identifying brain areas that include residual brain tumor, based on both 3D iB-mode and 3D iCEUS imaging. Our approach retraces the neurosurgeon’s process for interpreting the iUS image data. It is based on two assumptions. First, the tumor residuals are located beyond the resection cavity wall (for patients who underwent a gross total resection). Second, the tumor residuals are enhanced in the iUS image data, although they are hardly distinguishable from blood, cavity borders and artifacts in the iB-mode image data. Therefore, the method consists of extracting relevant information from both the iB-mode and CEUS modalities using automatic segmentation techniques and of fusing them according to rules designed to keep the tumor residuals. This procedure corresponds to the second (feature) and third (decision) fusion levels. In the proposed methodology, the suspect tissues are overlaid on the original 3D B-mode US to facilitate clinical interpretation. In this way, the physician’s decision regarding the tumor removal task can be optimized. To the best of our knowledge, this is the first time that a computer-assisted approach has been proposed to aid neurosurgeons in the detection of residual tumors based on iUS imaging. However, it is important to note that this work was tested “offline” on a limited database of patient images.
In the next section, the materials involved in this study and the image fusion approach proposed for detecting residual brain tumor are described. The results obtained from the experiments are presented and analyzed in Section 3 and discussed in Section 4. Finally, Section 5 provides the conclusions of this work.

2. Materials and Methods

2.1. Patient Image Dataset

At the end of brain tumor operations, 3D iB-mode and 3D iCEUS data were acquired using a neuronavigation system (SonoNavigator, Localite, Sankt Augustin, Germany) coupled with an ultrasound device (AplioXG, Toshiba Medical Systems Europe, Zoetermeer, The Netherlands). The resection cavity was filled with physiological liquid for the propagation of the ultrasound waves. A large linear array transducer (contact area: 13 mm × 46 mm; frequency range: 4.8 to 11.0 MHz; average frequency: 8 MHz; frame rate of the 2D ultrasound images: 29 fps) was positioned through the skull opening, in contact with the brain surface and the resection cavity surface. The surgeon scanned the cerebral region of interest with the 2D ultrasound transducer, whose position was tracked by the navigation system’s optical tracking module. A 3D ultrasound volume was then reconstructed from the 2D slices by the neuronavigation system. The 3D iCEUS data were obtained by injecting 4.8 mL of an intravenous ultrasound contrast agent (SonoVue, Bracco s.p.a, Milan, Italy) at a rate of 3.0 mL/min using a syringe pump (ACIST VueJect, Bracco s.p.a, Milan, Italy) and the contrast harmonic imaging (CHI) modality [10]. The contrast agent injection was performed via a central venous catheter positioned in the vena jugularis interna. In the original 2D ultrasound images, the pixel size is 0.422 mm × 0.422 mm, and the voxel size of the reconstructed 3D volumes is 1 × 1 × 1 mm³.
An image database of patients with different kinds of tumors was collected by the Department of Neurosurgery at the University Hospital of Leipzig, in the context of a previous research project funded by the German Research Society (Deutsche Forschungsgemeinschaft) and approved by the ethics commission of the University of Leipzig. Twenty-three patients were included in this “offline” analysis; the inclusion criteria were the availability of intraoperative ultrasound images (see Table 1), a histopathology of glioblastoma WHO Grade IV and a planned gross total or subtotal resection. Glioblastomas are tumors that infiltrate the brain tissue, and their borders with healthy tissue are unclear; the removal of the whole tumor is therefore a complex task for the surgeon. Possible tumor residuals in the 3D iB-mode and iCEUS data were manually segmented by four experts (neurosurgeons and scientists) with more than seven years of experience in intraoperative ultrasound imaging of brain tumors (image data acquisition and analysis). The task was performed using radiological findings and postoperative MR image data. For four patients, no tumor was visible in the iUS and MR image data. Radiological findings are medical reports provided by radiologists in which possible operation complications (for example, bleeding) and the presence of possible remnants of tumor tissue are described. These reports are routinely produced based on postoperative MR data.

2.2. Image Fusion for Residual Brain Tumor Identification

The approach to automatically identify tumor residuals based on iUS image data is depicted in Figure 3. It consists of automatically segmenting target structures in the image data and then of optimally fusing them to keep only those that provide relevant information. The target structures, i.e., the residual tumors, are highlighted in both the B-mode and CEUS modalities. Therefore, gray-level intensities were chosen as the feature for extracting tumor tissue.
A preprocessing stage was first carried out by extracting foreground masks for both images (i.e., B-mode and CEUS). Given an image I(i,j,k) where the background contains voxels of value zero, the mask M is obtained as M = I(i,j,k) > 0. Then, erosion filters (with structuring elements of size 9 × 9 × 3 and 3 × 3 × 1 for B-mode and CEUS, respectively) were applied to these masks. The original images were multiplied by the filtered masks to remove the artifacts located at the image borders, which are due to the contact of the ultrasound transducer with the brain surface.
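A minimal sketch of this preprocessing step in Python with NumPy and SciPy (the original implementation was built with MeVisLab; the array names and the loading of the volumes are assumptions):

```python
# Sketch of the preprocessing stage: foreground masking, erosion and
# artifact suppression. Volumes are assumed to be 3D NumPy arrays with
# zero-valued background voxels (hypothetical names).
import numpy as np
from scipy.ndimage import binary_erosion

def remove_border_artifacts(volume, se_shape):
    """Suppress voxels near the ultrasound image border.

    se_shape: structuring element size, (9, 9, 3) for B-mode and
    (3, 3, 1) for CEUS, as stated in Section 2.2.
    """
    mask = volume > 0                                  # M = I(i,j,k) > 0
    mask = binary_erosion(mask, structure=np.ones(se_shape))
    return volume * mask                               # masked volume

# b_clean = remove_border_artifacts(b_mode, (9, 9, 3))
# c_clean = remove_border_artifacts(ceus, (3, 3, 1))
```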
In a second step, high-intensity structures in the iUS data were automatically extracted using the Otsu multilevel thresholding method [34,35]. The Otsu method is one of the most robust and stable thresholding algorithms and can be reliably applied to real images. In its bi-level form, it separates objects from the background by maximizing the between-class variance [36]. Multilevel thresholding segments a gray-level image into N distinct homogeneous classes by estimating N − 1 thresholds T_i. Note that N should have a moderate value for multi-thresholding algorithms to produce reliable results; in the proposed implementation, N is not recommended to exceed five. Unfortunately, thresholding algorithms cannot automatically determine the number of thresholds [37], so the number of thresholds has to be fixed with the targeted regions in mind.
In the 3D iB-mode images, the highlighted structures are mainly the borders of the resection cavity, including blood and possible tumor residuals, but also blood vessels, bone structures and artifacts. In the 3D iCEUS images, they mainly consist of tumor residuals and vascular structures. The number of classes for the Otsu thresholding method was experimentally set to four and three for iB-mode and iCEUS, respectively, and the voxels classified in the highest intensity class were kept as the target (i.e., tumor remnant). Lastly, a post-processing stage based on an opening filter (with a structuring element of size 3 × 3 × 1) was applied to reduce the small false positive regions detected by the algorithm. The opening operation consists of an erosion followed by a dilation, such that $f \circ g = (f \ominus g) \oplus g$, where $f$ is the image and $g$ the structuring element.
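A minimal sketch of this segmentation step, using scikit-image's threshold_multiotsu and a SciPy opening (the function and variable names are illustrative, not the authors' code):

```python
# Multilevel Otsu thresholding followed by a morphological opening.
# threshold_multiotsu returns N-1 thresholds for N classes; voxels above
# the highest threshold form the highest-intensity class.
import numpy as np
from scipy.ndimage import binary_opening
from skimage.filters import threshold_multiotsu

def extract_bright_structures(volume, n_classes, se_shape=(3, 3, 1)):
    thresholds = threshold_multiotsu(volume, classes=n_classes)
    bright = volume > thresholds[-1]       # keep highest-intensity class
    # opening = erosion then dilation; removes small false positives
    return binary_opening(bright, structure=np.ones(se_shape))

# Class numbers as fixed in the paper: 4 for B-mode, 3 for CEUS
# seg_b = extract_bright_structures(b_clean, n_classes=4)
# seg_c = extract_bright_structures(c_clean, n_classes=3)
```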
For identifying suspicious brain tissue, decision-level fusion is performed based on expert knowledge. The main idea consists of selecting the structures that are enhanced in the 3D iCEUS images and that are located in the neighborhood of the cavity border as depicted in the 3D iB-mode images (Figure 4). This operation is performed by keeping the intersection of the segmented regions in both modalities. Let $\tilde{X}$ and $\tilde{Y}$ be the structures extracted from the volumes $X$ and $Y$, respectively, with $\tilde{X} \subseteq X$ and $\tilde{Y} \subseteq Y$. The fused image is then obtained via the following decision rule:

$$Z(i,j,k) = \begin{cases} 1, & \text{if } \tilde{X}(i,j,k) \wedge \tilde{Y}(i,j,k) = 1, \\ 0, & \text{otherwise,} \end{cases} \qquad (1)$$

where $Z(i,j,k)$, $\tilde{X}(i,j,k)$ and $\tilde{Y}(i,j,k)$ denote the voxels of the volumes $Z$, $\tilde{X}$ and $\tilde{Y}$, respectively.
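On boolean masks, this decision rule reduces to a voxel-wise logical AND, as the following sketch shows (names carried over from the previous snippets):

```python
# Decision-level fusion of Equation (1): a voxel is kept as suspicious
# tissue only if it is bright in both modalities.
import numpy as np

def fuse(seg_b_mode, seg_ceus):
    """Return Z, with Z(i,j,k) = 1 where both segmentations agree."""
    return np.logical_and(seg_b_mode, seg_ceus)

# fused = fuse(seg_b, seg_c)   # candidate residual-tumor voxels
```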

2.3. Validation

2.3.1. Qualitative Validation

The locations of the brain areas automatically detected by the algorithm are compared with the manual annotations of tumor residuals (i.e., the ground truth). The following code, A/B, was used to qualitatively assess the performance of the approach. The score A gives the degree of success of the algorithm in detecting residual tumors: a score of 1 indicates that all areas including tumor tissue were identified, a score of 0 means that only a part of the manually-annotated regions was detected, and a score of −1 indicates the failure of the algorithm. The second score, B (−1 or 1), reveals the additional detection of false positives (FP) by the algorithm, i.e., healthy structures misclassified as remnant tumorous structures: a score of 1 indicates the presence of FP, while −1 indicates their absence. Note that in the case of patients without tumor residuals, the first score A is omitted. Hence:
• 1/−1: all tumorous regions detected;
• 0/−1: a part of the tumor residuals detected;
• −1/−1: detection failure;
• 1/1: all tumorous regions detected, plus extra suspect regions (FP);
• 0/1: a part of the tumorous structures detected, plus FP;
• −1/1: extraction of FP only;
• /−1: patient without tumor residuals and no FP detected;
• /1: patient without tumor residuals and FP extracted.

2.3.2. Quantitative Validation

Tumor residuals extracted by our algorithm were quantitatively compared with the manual annotations considered as ground truth. Manual segmentation in iUS data is a complex task due to the unclear representation of tumorous structure borders. Therefore, the method validation was done in two steps, namely the comparison of (1) the localization of the areas containing the tumor residuals and (2) the voxel classification.
First, the tumorous structures detected by the algorithm and the manual annotations were enclosed in 3D bounding boxes. The overlap coefficient (Overlap) of these boxes was used as a similarity measure to assess the spatial localization of tumor residuals, as proposed by Dollar et al. [38]. An Overlap value of 1 is reached when one box is completely enclosed in the other, while a value of 0 occurs when there is no intersection between the boxes. Several boxes were used when different disconnected regions were detected; the final Overlap index was then the average of the indices calculated for each box. Depending on the application, this coefficient allows one to evaluate detection methods through a binary output based on a threshold value (i.e., detected or not detected). For instance, threshold values of 0.3 and 0.5 were set for target detection in [39] and [40], respectively. Thus, in our application, a threshold value of 0.5 was selected for evaluating the proposed approach: the detection of tumor residuals was considered successful when Overlap ≥ 0.5 and failed otherwise. This evaluation methodology, illustrated in the 3D iUS images in Figure 5 for three patients (1, 6 and 16), provides information about the intersection rate between the two volume boxes. The green and red bounding boxes encompass the brain areas identified by the algorithm and the ground truths, respectively. This similarity measure is defined as follows:
$$\mathrm{Overlap} = \frac{|BB_{al} \cap BB_{gt}|}{\min(|BB_{al}|, |BB_{gt}|)} \qquad (2)$$

where $BB_{al}$ and $BB_{gt}$ are the bounding boxes enclosing the brain areas detected by the algorithm and those manually annotated (ground truth), respectively.
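A sketch of this measure in Python, assuming for brevity a single connected region per mask (the paper averages the index over several boxes when disconnected regions are detected):

```python
# Overlap coefficient of two 3D bounding boxes derived from binary masks.
import numpy as np

def bounding_box(mask):
    """Return (min_corner, max_corner) of the nonzero voxels; the upper
    corner is exclusive."""
    idx = np.argwhere(mask)
    return idx.min(axis=0), idx.max(axis=0) + 1

def box_volume(lo, hi):
    return np.prod(np.maximum(hi - lo, 0))

def overlap(mask_alg, mask_gt):
    lo_a, hi_a = bounding_box(mask_alg)
    lo_g, hi_g = bounding_box(mask_gt)
    inter = box_volume(np.maximum(lo_a, lo_g), np.minimum(hi_a, hi_g))
    return inter / min(box_volume(lo_a, hi_a), box_volume(lo_g, hi_g))

# success = overlap(fused, ground_truth) >= 0.5
```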
Second, additional metrics, including the accuracy (Acc), the area under the ROC curve (AUC) [41] and the error rate (Err), i.e., the percentage of wrong classifications [42], were calculated to evaluate the classification of voxels as tumor residual or healthy tissue by the method. This evaluation was carried out by interactively defining a region of interest enclosing the resection cavity where the remnant tumors can be found. Furthermore, these metrics were computed only for the cases where the method succeeded in identifying tumor residuals based on the first quantitative metric (i.e., Overlap ≥ 0.5). These similarity measures take values in the interval [0,1]; Acc and AUC values of 1 and an Err value of 0 represent the best performance of the algorithm. They are calculated as:
$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \qquad (3)$$

$$AUC = \frac{1}{2}\left(\frac{TN}{TN + FP} + \frac{TP}{TP + FN}\right) \qquad (4)$$

$$Err = \frac{FP + FN}{TP + TN + FP + FN} \qquad (5)$$
where TP, TN, FP and FN are the true positives (voxels correctly classified as tumorous tissue), true negatives (voxels correctly classified as healthy tissue), false positives (healthy tissue misclassified as tumor region) and false negatives (undetected tumorous tissue), respectively. Note that Acc and Err are complementary (Acc = 1 − Err), but both were reported for an easy interpretation of the final results in terms of accuracy or error rates.
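These metrics can be computed from a predicted mask and a ground-truth mask inside such a region of interest; the sketch below uses the names of the earlier snippets and is purely illustrative:

```python
# Voxel-wise Acc, AUC and Err inside a boolean region of interest (roi).
# The AUC formula above equals the balanced accuracy of a binary labeling.
import numpy as np

def voxel_metrics(pred, gt, roi):
    p = pred[roi].astype(bool)
    g = gt[roi].astype(bool)
    tp = np.sum(p & g)                 # tumor voxels correctly detected
    tn = np.sum(~p & ~g)               # healthy voxels correctly rejected
    fp = np.sum(p & ~g)                # healthy voxels flagged as tumor
    fn = np.sum(~p & g)                # missed tumor voxels
    acc = (tp + tn) / (tp + tn + fp + fn)
    auc = 0.5 * (tn / (tn + fp) + tp / (tp + fn))
    err = (fp + fn) / (tp + tn + fp + fn)   # equals 1 - acc
    return acc, auc, err
```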

3. Experimental Results

This section provides the evaluation results of the proposed method for automatically identifying possible brain tumor residuals. The implementation was performed with the MeVisLab software development kit. The method was tested “offline” on the data of 23 patients with glioblastoma, where 19 patients (Set A, Patients 1 to 19) presented tumor residuals, while no remnant tumor tissue was indicated for the remaining four patients (Set B, Patients 20 to 23).

3.1. Evaluation of the Influence of the Class Number in the Segmentation Step

The performance of the system depends on the setting of parameters such as the class numbers (multilevel Otsu thresholding method) and the filter window sizes in the erosion and opening operations. The influence of the class number on the segmentation results was estimated by analyzing eight class-number configurations, where the notation α–β represents the class numbers in B-mode and CEUS, respectively. Figure 6 shows the mean values of AUC and Acc calculated on the patient set using these eight configurations. It can be clearly observed that the highest Acc is achieved by selecting a large number of classes (e.g., 5–5). On the other hand, the highest AUC is obtained with a low number of classes (e.g., 3–2). When α and β increase, the system becomes more selective, or less sensitive: the probability to detect highlighted structures, including tumor residuals and other hyperechogenic structures, is reduced. On the contrary, it becomes more sensitive when α and β decrease (large values of AUC), and the probability to detect these highlighted structures is maximized. The primary objective of the tool is the localization of tumor remnants rather than their accurate segmentation. Therefore, the optimal number of classes is obtained when a balance between high values of both Acc and AUC is reached. A trade-off was obtained by setting α and β to 4 and 3, respectively.
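The class-number sweep can be reproduced schematically as below; the patient container and most of the eight (α, β) pairs are assumptions, since the text explicitly names only the 5–5, 3–2 and 4–3 configurations:

```python
# Sweep over (alpha, beta) class-number configurations, averaging Acc and
# AUC over the patient set. Reuses extract_bright_structures, fuse and
# voxel_metrics from the earlier sketches; 'patients' is a hypothetical
# list of (b_mode, ceus, ground_truth, roi) tuples.
import numpy as np

CONFIGS = [(3, 2), (3, 3), (4, 2), (4, 3), (4, 4), (5, 3), (5, 4), (5, 5)]

def sweep(patients):
    results = {}
    for alpha, beta in CONFIGS:
        accs, aucs = [], []
        for b_mode, ceus, gt, roi in patients:
            seg_b = extract_bright_structures(b_mode, n_classes=alpha)
            seg_c = extract_bright_structures(ceus, n_classes=beta)
            acc, auc, _ = voxel_metrics(fuse(seg_b, seg_c), gt, roi)
            accs.append(acc)
            aucs.append(auc)
        results[(alpha, beta)] = (np.mean(accs), np.mean(aucs))
    return results
```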

3.2. Method Evaluation

The outcomes obtained by the proposed automatic method are presented in Figure 7, Figure 8 and Figure 9, where the algorithm results (in green) and the ground truths (in red) are overlaid on a selected slice of the 3D iB-mode images for visualization purposes. Table 2 summarizes the qualitative and quantitative evaluation; the former is based on expert observations, and the latter uses the overlap, accuracy, area under the curve and error rate measures. The experiments showed that our approach succeeded in detecting the position of all tumor remnant areas in 15 out of 19 patients (Overlap ≥ 0.5). For these cases, a qualitative coding of 1/−1 (all tumorous regions detected) or 1/1 (all tumorous regions detected, plus extra suspect regions) was observed. Regarding the four unsuccessful cases, the areas with tumorous tissue were partially detected in two patients (Patients 2 and 7, where Overlap < 0.5), and the algorithm failed in the two other cases (Patients 14 and 18, where Overlap = 0). One reason for failure is the position of tumor residuals near the image top (Patients 7 and 18): these areas are removed in the preprocessing step that eliminates artifacts caused by the US probe. The method was also tested on the patient data from Set B, where false positives were detected in the cases of Patients 20 and 23 and none for Patients 21 and 22.
Additionally, three cases that include false positives were found (Patients 4, 14 and 18). These areas correspond to hyperechogenic structures (for example, bone and blood on the cavity border) in both the iB-mode and iCEUS image data, and they are therefore extracted by the method. However, when the false positives are detected in areas far away from the resection cavity (e.g., Patients 4 and 18), these outcomes do not affect the clinical interpretation of the data, because tumor residuals can be found only in the cavity.
In general, the quantitative metric used for estimating the localization of tumor residuals supports the expert classifications. Overlap values lower than 0.5 were obtained when areas with tumor residuals were partly or not detected by the approach (Patients 2, 7, 14 and 18). However, the absolute value of the Overlap coefficient does not provide a quality rate for the segmentation of tumor remnants. For instance, a value of 1 was reached for Patient 10 because one box was completely included in the other, but this case does not show the best visual result. The other metrics objectively measure the voxel classification quality. The highest accuracy values (Acc ≥ 0.97) and lowest error rates (Err < 0.03) were obtained for Patients 1, 3, 6, 16, 17 and 19, because the algorithm correctly detected most of the true positives. Moreover, good accuracy scores (0.93 ≤ Acc ≤ 0.96) and error rates (Err < 0.08) were reached in the cases of Patients 4, 8, 9, 11, 12, 13 and 15. The lowest scores (Acc of 0.8105 and 0.8794, Err of 0.1895 and 0.1206) were obtained for Patients 5 and 10. In addition, the AUC rates show how well true positives and false positives are distinguished by the method.

4. Discussion

4.1. General Approach

The automatic detection of brain areas including tumor residuals is based on the representation of tumor tissue in the iB-mode and iCEUS image data. Ultrasound contrast enhancement is visible only in vascularized tissue, like tumors or vascular structures. Therefore, these structures are easily distinguishable from the surrounding lobar parenchyma in the iCEUS data. In some cases, local brain tissue edema and small local blood layers show a slight enhancement, but they differ in echogenicity from normal tissue in B-mode. Moreover, with our linear probe, we focused on the tumor and the surrounding tissue; the basal ganglia area was therefore mainly out of our focus, and in the remaining cases, we found no remarkably higher enhancement. Because this region plays a relevant role in the brain [43], a future study addressing the problem of CEUS-based tumor residual detection close to this area would be important. However, the iCEUS modality is still at the evaluation stage for brain tumor applications. The comparison of highlighted areas in the iCEUS data with their histological findings was performed previously on the same patient dataset [16]; a sensitivity of 85% and a specificity of 28% were obtained. Moreover, the evaluation of the approach was performed using manual segmentation, the reliability of which is questionable. As described previously, four experts (neurosurgeons and scientists) with experience in intraoperative ultrasound imaging were involved in the manual segmentation, and postoperative MR data and radiological findings were used to confirm the annotations. Even if the certainty of the manual segmentation of tumor residuals has not been proven, no better validation method is currently available. Therefore, first, a global quantitative evaluation method based on an overlap similarity measure was used. It quantifies the position agreement of two regions, rather than the number of common elements (or voxels), which is more suitable when uncertainties on the target regions are obvious. Second, additional metrics were used to evaluate the method in terms of voxel classification.
The manual and quantitative validation results revealed three limitations of the tumor residual detection approach. Firstly, the current algorithm may miss residual tumors. Secondly, the algorithm extracts extra regions that were not labeled as residual tumors by the experts. Thirdly, the regions including tumor remnants segmented by the algorithm and those in the ground truth have different sizes and positions. These three points are discussed in the next paragraphs.

4.2. Influence of the Parameter Values in the Algorithm

The surgeon is sterile during the operation; therefore, tactile interactions with the software have to be limited, and fixed parameter values are required to increase the automation of the tools. The parameter values in the pre-processing and post-processing steps were experimentally defined. High-intensity structures, in particular tumor residuals, are finer and thinner in the CEUS image data. Thus, filters of smaller sizes than those needed in B-mode are required to preserve them. The other parameters of the method are the numbers of classes in the segmentation process. Our tests showed that the Acc values increase with a larger number of classes, while the AUC values decrease at the same time. Large AUC values lead to the detection of many voxels labeled as tumor residuals, but also of many false positives, which is undesirable. The choice of 4–3 classes showed a good compromise. This can be interpreted as the ability of the algorithm to localize regions with tumor residuals in the images, rather than to provide an accurate segmentation of the tumor remnants. Moreover, the B-mode modality represents much more information (different anatomical structures) than the CEUS technique (contrast agent only), which explains why the optimal class number is smaller in CEUS than in B-mode.

4.3. Failure of Residual Tumor Identification

The method failed to correctly identify the residual tumors in four out of 19 patients (Patients 2, 7, 14 and 18). A first reason for failure is the image quality. The approach was tested on 3D US volumes reconstructed from the acquired 2D images. The 3D reconstruction algorithm makes use of smoothing functions; therefore, hyperechogenic structures appear with lower contrast in the 3D volumes. Moreover, the time window of maximal contrast agent enhancement in the CEUS image data is short, and the 3D acquisition requires a couple of seconds, so the time point of maximal enhancement may be missed during the acquisition. This image quality drawback could be addressed by directly using the raw data (2D images). However, with the current neuronavigation system used at the University Hospital of Leipzig, we only have access to the reconstructed 3D iUS volume and not to the original 2D iUS data. A second reason lies in the algorithm itself. Artifacts located at the image borders are removed in the preprocessing step (Section 2.2), but tumor areas can be lost through this process as well. Therefore, improvements in the pre-processing step and in the characterization of tumor residuals in the iUS images are needed.

4.4. Extraction of Extra Regions by the Algorithm

Figure 10 depicts an example where extra regions, here the falx, are identified by the fusion method. These structures are obviously not tumorous tissue because they are located far from the resection cavity. Moreover, the elongated and indented shape of the extracted region is not characteristic of tumor residuals, whose shape is rather compact. However, this area was enhanced in both the 3D iB-mode and iCEUS image data and was therefore extracted by the algorithm. A semi-automatic approach could be applied by interactively defining a region of interest enclosing the surroundings of the resection cavity in order to limit the search volume for tumor residuals; a minimal sketch is given after this paragraph. Figure 10 illustrates the results reached with the automatic and semi-automatic methods for a specific case, in the first and second rows, respectively. The automatic method result is sufficient for the neurosurgeon, who relies on his or her knowledge to extract the correct information from the set of regions that the algorithm suggests. Moreover, the semi-automatic process could be automated by extracting the hole formed by the cavity.
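A minimal sketch of this ROI restriction, with placeholder box coordinates that would be chosen interactively by the user:

```python
# Restrict the fused detection map to a user-defined box around the
# resection cavity; corner coordinates are placeholders.
import numpy as np

def restrict_to_roi(fused, lo, hi):
    """Keep detections only inside the half-open box [lo, hi)."""
    roi = np.zeros_like(fused, dtype=bool)
    roi[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return fused & roi

# semi_auto = restrict_to_roi(fused, lo=(40, 30, 10), hi=(120, 110, 60))
```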

4.5. Differences between the Brain Areas Detected by the Algorithm and in the Ground Truth

The quantitative evaluation showed that the brain areas detected by the algorithm and those segmented by the experts have different positions and sizes. The algorithm essentially extracts image regions with high intensities, whereas the experts additionally considered the postoperative MR data and the radiological findings to refine the regions including tumor tissue. The extraction of additional features (e.g., texture and shape) could improve the tissue classification by the automatic approach. In conclusion, our approach is capable, at this stage, of pointing out suspicious brain areas in the iUS images rather than segmenting the tumor residuals. A better characterization of tumor tissue by using shape descriptors and additional intraoperative ultrasound modalities, like ultrasound perfusion, should improve the performance of automatic methods.

5. Conclusions

The problem of identifying the presence or absence of residual brain tumor in iUS image data was addressed in this work. Our hypotheses are that: (1) residual tumorous tissue is most of the time located beyond the borders of the resection cavity, which is well visible in the B-mode modality; and (2) tumor tissue is highlighted in both the B-mode and CEUS modalities. The approach first extracts relevant information from the iUS image data and, second, keeps possible tumor remnants using image fusion techniques. Two kinds of evaluation were performed, i.e., in terms of the localization of the regions containing the tumor residuals and in terms of voxel classification. The experiments showed that the method was able to successfully localize brain regions possibly including tumor residuals for 15 out of 19 patients (Set A). The average values of the accuracy, the area under the ROC curve and the error rate were 0.9507, 0.7351 and 0.0493, respectively. A better characterization of the tumor residuals, including texture descriptors for example, and additional intraoperative ultrasound modalities should improve the performance of the new automatic approaches. Our approach represents a considerable advance in the computer-assisted surgery field for the automatic detection of residual brain tumors. Nevertheless, it is important to note that at this stage the method was tested “offline”, and it is still far from clinical application. Future work will focus on method improvements and on validation on a larger patient database.

Acknowledgments

This work has been supported by the National Council of Science and Technology of Mexico (CONACYT) under Grant Number 493442. The authors would like to thank the Department of Neurosurgery, University Hospital Leipzig, for the clinical study and data collection in the context of a previous research project funded by the German Research Society (Deutsche Forschungsgemeinschaft). The University of Guanajuato, Engineering Division, Campus Irapuato-Salamanca, is acknowledged for providing the funds to cover the costs of publishing in open access.

Author Contributions

Claire Chalopin and Dirk Lindner designed the project. Felix Arlt performed the data acquisition on patients during brain tumor surgeries. Elisee Ilunga-Mbuyamba, Horacio Rostro-Gonzalez, Ivan Cruz-Aceves and Juan Gabriel Avina-Cervantes developed and implemented the image processing and visualization tools to address the problem. Elisee Ilunga-Mbuyamba and Claire Chalopin contributed equally to the paper writing.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Moiyadi, A.V.; Shetty, P. Direct navigated 3D ultrasound for resection of brain tumors: A useful tool for intraoperative image guidance. Neurosurg. Focus 2016, 40, E5.
2. Selbekk, T.; Jakola, A.S.; Solheim, O.; Johansen, T.F.; Lindseth, F.; Reinertsen, I.; Unsgård, G. Ultrasound imaging in neurosurgery: Approaches to minimize surgically induced image artefacts for improved resection control. Acta Neurochir. 2013, 155, 973–980.
3. Xiao, X.; Dong, L.; Jiang, Q.; Guan, X.; Wu, H.; Luo, B. Incorporating Contrast-Enhanced Ultrasound into the BI-RADS Scoring System Improves Accuracy in Breast Tumor Diagnosis: A Preliminary Study in China. Ultrasound Med. Biol. 2016, 42, 2630–2638.
4. Masumoto, N.; Kadoya, T.; Amioka, A.; Kajitani, K.; Shigematsu, H.; Emi, A.; Matsuura, K.; Arihiro, K.; Okada, M. Evaluation of Malignancy Grade of Breast Cancer Using Perflubutane-Enhanced Ultrasonography. Ultrasound Med. Biol. 2016, 42, 1049–1057.
5. Friedrich-Rust, M.; Klopffleisch, T.; Nierhoff, J.; Herrmann, E.; Vermehren, J.; Schneider, M.D.; Zeuzem, S.; Bojunga, J. Contrast-Enhanced Ultrasound for the differentiation of benign and malignant focal liver lesions: A meta-analysis. Liver Int. 2013, 33, 739–755.
6. Kim, T.; Jang, H. Contrast-enhanced ultrasound in the diagnosis of nodules in liver cirrhosis. World J. Gastroenterol. 2014, 13, 3590–3596.
7. Barr, R.G.; Peterson, C.; Hindi, A. Evaluation of Indeterminate Renal Masses with Contrast-enhanced US: A Diagnostic Performance Study. Radiology 2014, 271, 133–142.
8. Cai, Y.; Du, L.; Li, F.; Gu, J.; Bai, M. Quantification of Enhancement of Renal Parenchymal Masses with Contrast-Enhanced Ultrasound. Ultrasound Med. Biol. 2014, 40, 1387–1393.
9. Houtzager, S.; Wijkstra, H.; de la Rosette, J.J.M.C.H.; Laguna, M.P. Evaluation of Renal Masses with Contrast-Enhanced Ultrasound. Curr. Urol. Rep. 2013, 14, 116–123.
10. Ilunga-Mbuyamba, E.; Avina-Cervantes, J.G.; Lindner, D.; Cruz-Aceves, I.; Arlt, F.; Chalopin, C. Vascular Structure Identification in Intraoperative 3D Contrast-Enhanced Ultrasound Data. Sensors 2016, 16, 497.
11. Prada, F.; Del Bene, M.; Saini, M.; Ferroli, P.; DiMeco, F. Intraoperative cerebral angiosonography with ultrasound contrast agents: How I do it. Acta Neurochir. 2015, 157, 1025–1029.
12. Chalopin, C.; Krissian, K.; Meixensberger, J.; Müns, A.; Arlt, F.; Lindner, D. Evaluation of a semi-automatic segmentation algorithm in 3D intraoperative ultrasound brain angiography. Biomed. Eng. 2013, 58, 293–302.
13. He, W.; Jiang, X.Q.; Wang, S.; Zhang, M.Z.; Zhao, J.Z.; Zhao Liu, H.; Ma, J.; Xiang, D.Y.; Wang, L.S. Intraoperative contrast-enhanced ultrasound for brain tumors. Clin. Imaging 2008, 32, 419–424.
14. Prada, F.; Perin, A.; Martegani, A.; Aiani, L.; Solbiati, L.; Lamperti, M.; Casali, C.; Legnani, F.; Mattei, L.; Saladino, A.; et al. Intraoperative contrast-enhanced ultrasound for brain tumor surgery. Neurosurgery 2014, 74, 542–552.
15. Ritschel, K.; Pechlivanis, I.; Winter, S. Brain tumor classification on intraoperative contrast-enhanced ultrasound. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 531–540.
16. Arlt, F.; Chalopin, C.; Müns, A.; Meixensberger, J.; Lindner, D. Intraoperative 3D contrast-enhanced ultrasound (CEUS): A prospective study of 50 patients with brain tumours. Acta Neurochir. 2016, 158, 685–694.
17. Prada, F.; Bene, M.D.; Fornaro, R.; Vetrano, I.G.; Martegani, A.; Aiani, L.; Sconfienza, L.M.; Mauri, G.; Solbiati, L.; Pollo, B.; et al. Identification of residual tumor with intraoperative contrast-enhanced ultrasound during glioblastoma resection. Neurosurg. Focus 2016, 40, E7.
18. Piella, G. A general framework for multiresolution image fusion: From pixels to regions. Inf. Fusion 2003, 4, 259–280.
19. Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109.
20. Han, C.; Zhang, H.; Gao, C.; Jiang, C.; Sang, N.; Zhang, L. A Remote Sensing Image Fusion Method Based on the Analysis Sparse Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 439–453.
21. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89.
22. Lu, Z.; Jiang, X.; Kot, A.C. A Color Channel Fusion Approach for Face Recognition. IEEE Signal Process. Lett. 2015, 22, 1839–1843.
23. Chen, C.; Jafari, R.; Kehtarnavaz, N. Improving Human Action Recognition Using Fusion of Depth Camera and Inertial Sensors. IEEE Trans. Hum. Mach. Syst. 2015, 45, 51–61.
24. Bhatnagar, G.; Wu, Q.J.; Liu, Z. A new contrast based multimodal medical image fusion framework. Neurocomputing 2015, 157, 143–152.
25. Liu, X.; Mei, W.; Du, H. Multimodality medical image fusion algorithm based on gradient minimization smoothing filter and pulse coupled neural network. Biomed. Signal Process. Control 2016, 30, 140–148.
26. Das, S.; Kundu, M.K. A Neuro-Fuzzy Approach for Medical Image Fusion. IEEE Trans. Biomed. Eng. 2013, 60, 3347–3353.
27. Zhu, Z.; Chai, Y.; Yin, H.; Li, Y.; Liu, Z. A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing 2016, 214, 471–482.
28. Bhatnagar, G.; Wu, Q.M.J.; Liu, Z. Directive Contrast Based Multimodal Medical Image Fusion in NSCT Domain. IEEE Trans. Multimed. 2013, 15, 1014–1024.
29. Xu, X.; Shan, D.; Wang, G.; Jiang, X. Multimodal medical image fusion using PCNN optimized by the QPSO algorithm. Appl. Soft Comput. 2016, 46, 588–595.
30. Kavitha, C.; Chellamuthu, C. Medical image fusion based on hybrid intelligence. Appl. Soft Comput. 2014, 20, 83–94.
31. Nemec, S.F.; Donat, M.A.; Mehrain, S.; Friedrich, K.; Krestan, C.; Matula, C.; Imhof, H.; Czerny, C. CT–MR image data fusion for computer assisted navigated neurosurgery of temporal bone tumors. Eur. J. Radiol. 2007, 62, 192–198.
32. Prada, F.; Del Bene, M.; Mattei, L.; Casali, C.; Filippini, A.; Legnani, F.; Mangraviti, A.; Saladino, A.; Perin, A.; Richetta, C.; et al. Fusion imaging for intra-operative ultrasound-based navigation in neurosurgery. J. Ultrasound 2014, 17, 243–251.
33. Inoue, H.K.; Nakajima, A.; Sato, H.; Noda, S.; Saitoh, J.; Suzuki, Y. Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy. Cureus 2015, 7, e252.
34. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
35. Hui-Fuang, N. Automatic thresholding for defect detection. Pattern Recognit. Lett. 2006, 27, 1644–1649.
36. Sahoo, P.K.; Soltani, S.; Wong, A.K.; Chen, Y.C. A Survey of Thresholding Techniques. Comput. Vis. Graph. Image Process. 1988, 41, 233–260.
37. Arora, S.; Acharya, J.; Verma, A.; Panigrahi, P.K. Multilevel Thresholding for Image Segmentation through a Fast Statistical Recursive Algorithm. Pattern Recognit. Lett. 2008, 29, 119–125.
38. Dollar, P.; Tu, Z.; Perona, P.; Belongie, S. Integral Channel Features. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 7–10 September 2009.
39. Cherif, I.; Solachidis, V.; Pitas, I. A Tracking Framework for Accurate Face Localization. In Proceedings of the Artificial Intelligence in Theory and Practice: IFIP 19th World Computer Congress, TC 12: IFIP AI 2006 Stream, Santiago, Chile, 21–24 August 2006; Springer US: Boston, MA, USA, 2006; pp. 385–393.
40. Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes challenge: A retrospective. Int. J. Comput. Vis. 2015, 111, 98–136.
41. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
42. Shafiee, M.J.; Siva, P.; Fieguth, P.; Wong, A. Embedded Motion Detection via Neural Response Mixture Background Modeling. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 837–844.
43. Lanciego, J.L.; Luquin, N.; Obeso, J. Functional Neuroanatomy of the Basal Ganglia. Cold Spring Harb. Perspect. Med. 2012, 12, 233–260.
Figure 1. Intraoperative 2D image acquisition with an ultrasound (US) probe placed at the patient's open head surface during brain tumor surgery.
Figure 2. Intraoperative B-mode ultrasound (iB-mode) (left) and iCEUS (contrast-enhanced ultrasound (CEUS)) (right) patient image data acquired at the end of a brain tumor operation.
Figure 3. Image processing approach for brain tumor residual identification. The method is subdivided into four main steps. First, an image preprocessing is performed to remove the ultrasound image border by using erosion filters. Second, highlighted structures are extracted in both imaging modalities by applying the Otsu multilevel thresholding method. Third, the segmented structures are combined via the fusion rule defined by Equation (1). Finally, a post-processing stage removes small detected structures, which are in general false positives.
Figure 4. Image fusion approach for residual brain tumor identification. The border of the resection cavity and the highlighted structures are extracted from B-mode and CEUS, respectively. Afterwards, they are combined in the feature-level fusion step. Finally, the expected result is obtained by selecting only specific structures based on the rules defined in the decision-level fusion step.
Figure 5. 3D representation of the quantitative evaluation approach on Patients 1, 6 and 16. BB_al is the bounding box of the algorithm result, and BB_gt is the bounding box of the ground truth.
Figure 6. AUC and Acc performance rates computed for several class-number configurations in B-mode and CEUS.
Figure 7. Results of residual tumor identification for Patients 1 to 6. The results obtained with the proposed automatic method (in green) and in the manual segmentation (in red) are overlaid on a selected slice of the 3D iB-mode image data. The algorithm missed tumorous structures in Patient 2 and identified extra regions in Patient 4.
Figure 8. Results of residual tumor identification for Patients 7 to 12. The results obtained by using the proposed automatic method (in green) are superimposed with the expert manual segmentation (in red). The algorithm missed the detection of other tumorous structures in the case of Patient 7, and it identified a large region in the case of Patient 10.
Figure 9. Results of residual tumor identification for Patients 13 to 19. The results obtained by using the proposed automatic method (in green) are overlaid with the expert manual segmentation (in red). The algorithm completely missed the target in the case of Patients 14 and 18. In addition, it detected an extra region in the case of Patient 15.
Figure 10. Results of residual tumor identification for Patient 4: automatic versus semi-automatic approaches. Row 1: the proposed automatic method, where the white arrows show extra regions detected by the algorithm. Row 2: correction of the over-detection of residual tumors by using a semi-automatic method based on an ROI. The algorithm outcomes (in green) are superimposed with the expert manual segmentation (in red).
Table 1. Brain tumor data for each patient: location, side and size of the tumor.

Patient  Location           Side   Tumor Size (mL)
1        frontotemporal     left   45.3
2        temporal           right  73.5
3        frontal            right  11.5
4        temporal           left   26.8
5        frontal            left   14.7
6        temporal           left   9.6
7        parietal           left   24.4
8        frontal            left   30.6
9        frontal            left   11.5
10       frontal            right  30.3
11       occipital          left   55.6
12       frontal            left   15.1
13       frontal            right  43.6
14       frontal            right  33.0
15       temporal           right  33.4
16       frontal            right  41.7
17       parieto-occipital  right  46.9
18       frontal            left   23.3
19       frontal            right  72.2
20       parietal           left   40.9
21       frontal            left   1.5
22       frontal            left   17.9
23       parieto-occipital  left   22.9
Table 2. Overlap, accuracy (Acc), area under the curve (AUC) and error rate (Err) measures obtained from the identification of residual brain tumors by using the proposed data fusion approach. Overlap values above 0.5 indicate the successful localization of the residual tumor (success = 1), and those under this threshold mean failure (success = 0). Patients 1 to 19 presented tumor residuals, while the tumor tissue was completely removed during the operation for Patients 20 to 23.

Patient  Qualitative  Overlap  Success  Acc     AUC     Err
1        1/−1         0.5307   1        0.9879  0.8405  0.0121
2        0/−1         0.3000   0        -       -       -
3        1/−1         0.6875   1        0.9795  0.8990  0.0205
4        1/1          0.6666   1        0.9493  0.7650  0.0507
5        1/−1         0.7551   1        0.8105  0.8442  0.1895
6        1/−1         0.6913   1        0.9777  0.8803  0.0223
7        0/−1         0.2571   0        -       -       -
8        1/−1         0.8888   1        0.9618  0.6296  0.0382
9        1/−1         0.8500   1        0.9699  0.6642  0.0301
10       1/−1         1.0000   1        0.8794  0.8954  0.1206
11       1/−1         0.5053   1        0.9528  0.5367  0.0472
12       1/−1         1.0000   1        0.9522  0.5269  0.0478
13       1/−1         0.7173   1        0.9697  0.6257  0.0303
14       −1/1         0        0        -       -       -
15       1/−1         0.7222   1        0.9347  0.6571  0.0653
16       1/−1         0.7741   1        0.9864  0.7869  0.0135
17       1/−1         0.8000   1        0.9721  0.5914  0.0279
18       −1/1         0        0        -       -       -
19       1/−1         0.6464   1        0.9766  0.8837  0.0234
20       /1           -        -        -       -       -
21       /−1          -        -        -       -       -
22       /−1          -        -        -       -       -
23       /1           -        -        -       -       -
