Article

A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net

by Nicola Altini 1,*,†, Antonio Brunetti 1,2,†, Valeria Pia Napoletano 1, Francesca Girardi 1, Emanuela Allegretti 1, Sardar Mehboob Hussain 1, Gioacchino Brunetti 3, Vito Triggiani 3, Vitoantonio Bevilacqua 1,2 and Domenico Buongiorno 1,2

1 Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, 70126 Bari, BA, Italy
2 Apulian Bioengineering s.r.l., Via delle Violette n.14, 70026 Modugno, BA, Italy
3 Masmec Biomed SpA, Via delle Violette n.14, 70026 Modugno, BA, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Bioengineering 2022, 9(8), 343; https://doi.org/10.3390/bioengineering9080343
Submission received: 1 July 2022 / Revised: 13 July 2022 / Accepted: 21 July 2022 / Published: 26 July 2022

Abstract

In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), provides the basis for targeted biopsy by allowing information coming from both imaging modalities to be compared at the same time. Compared with the standard clinical procedure, it provides a less invasive option for the patients and increases the likelihood of sampling cancerous tissue regions for the subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved from both the MRI and TRUS domains. The automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors, including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a huge quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, in order to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm in order to create a tool that can help the physician to perform a targeted prostate biopsy by interacting with the graphical user interface.

1. Introduction

Prostate cancer is a major health problem and represents the most common cancer in the male population, accounting for 18.5% of all the cancers diagnosed in men [1]. More than 1,275,000 new cases were recorded worldwide in 2018 alone, causing approximately 360,000 deaths (3.8% of all deaths caused by cancer in men) [2]. Numerous imaging modalities are employed for prostate cancer diagnosis, treatment, and follow-up. Transrectal ultrasound (TRUS), magnetic resonance imaging (MRI), and computed tomography (CT) are the most commonly employed imaging modalities [3]. Each technique provides different information and is used for different clinical purposes. During biopsy procedures, TRUS is commonly employed since it is an inexpensive, portable, and real-time modality [4]. MRI is mainly adopted for diagnosis and treatment planning [5]. In fact, this modality has better soft tissue contrast and allows more efficient lesion detection and staging in patients affected by prostate cancer. As can be seen from Figure 1, TRUS images suffer from problems such as speckle, low contrast, and shadow artifacts [6]. Calcification and acoustic shadowing make the automatic segmentation of the prostate region a very complex task [7]. The prostate usually appears as a hypoechoic mass encompassed by a hyperechoic region [8]. CT scans are useful for determining whether prostate cancer has spread to bone tissue and for assessing the effectiveness of brachytherapy [9,10].
In clinical practice, the majority of prostate cancer cases are diagnosed prior to symptom development, thanks to prostate-specific antigen (PSA) [11] blood levels and rectal examination. In order to achieve detailed information, MRI is the preferred modality, with PI-RADS v2.1 being the standard for interpreting findings [12].
The standard prostate biopsy procedure involves the extraction of 10–12 tissue samples. Since there is no guarantee that sampling the prostate in these regions is the most effective way to capture the regions with cancerous tissue, fusion-guided prostate biopsy is now becoming the preferred modality for most urologists and surgeons. In this way, suspicious areas found in the MRI of the prostate can be targeted during the biopsy procedure by exploiting the fusion with real-time TRUS imaging, which also allows a better view of the biopsy needle. The advantages of this approach include more accurate sampling of the cancerous tissue in the prostate gland, less extracted tissue, less pain, and fewer risks for the patient, including a faster recovery time [13].
In order to implement a fusion-guided prostate biopsy framework, segmentation of the prostate gland must be obtained from both the TRUS and MRI domains. Exceptions involve systems in which images are manually registered by the user at procedure time, by superimposing the MRI over the TRUS. Since MRI is acquired days before the prostate biopsy, its segmentation does not need to be performed in real time. The manual segmentation of the prostate from MRI is a tedious task and prone to inter- and intra-radiologist variation [14], so the exploitation of an automatic method grounded on the nnU-Net framework [15] can further ease the procedure and improve its diagnostic accuracy. It is worth noting that nnU-Net does not denote a novel network topology, loss function, or training procedure. Indeed, nnU-Net stands for “no new net”. The strength of the nnU-Net framework comes from the systematization of all the steps which were usually manually tuned in the training pipeline of semantic segmentation architectures, including data augmentation, hyperparameter tuning, test-time augmentation, and ensembling. Instead, segmentation of the prostate gland from TRUS has to be obtained in real time during the biopsy procedure; therefore, a fast and effective methodology for this task is crucial in clinical practice.
Ghose et al. performed a comprehensive survey focused on methods for prostate segmentation in TRUS, MR, and CT images [3]. Prostate gland segmentation eases multimodal image fusion for tumor localization in biopsy. Manual annotation of radiological images is a tedious and error-prone task, which also suffers from inter- and intra-radiologist variability. Fully automatic methods, such as those based on deep learning (DL), require large amounts of annotated data, usually acquired with the same transducer that will be used for the procedures, since there is high variability in ultrasound image quality across vendors. Nonetheless, when data are available, DL methodologies show their strength, as is the case for deep attentional features (DAF) [16] and DAF 3D [17]. The shortcoming of these techniques is that they cannot be applied before a dataset with images of the same ultrasound device that will be adopted during procedures has been acquired.
Mahdavi et al. [18] proposed a semiautomatic prostate segmentation method that can be applied in prostate brachytherapy setups. The 3D geometric model of the prostate is created based on prior knowledge of the shape of the gland and on the assumption that the prostate has a tapered ellipsoidal shape and is slightly warped posteriorly due to the presence of the TRUS probe. They used a tapered and warped ellipsoid as the prior shape of the prostate gland. The proposed segmentation algorithm requires manual initialization by the physician: on the mid-gland image, the user selects six boundary points following a specific criterion. The main disadvantage of this method is that it requires the user to input the initialization points in a very precise way, and relies heavily on these points, posing problems if they are placed slightly inaccurately or when the prostate region has an irregular shape.
Gong et al. [19] incorporated deformable superellipses in a Bayesian segmentation framework, exploiting an edge-detection algorithm for discovering prostate boundaries. They showed the capacity of deformable superellipses to capture the prostate shape in various anatomical zones. The main limitation of this method is that it requires an initial contour that is similar to the real boundaries of the prostate gland. To overcome this issue, Saroul et al. [20] proposed a variational approach, exploiting the implicit representation of a superellipse for modeling the active contour.
This work addresses the problem of providing a fast and reliable semiautomatic method for prostate segmentation from TRUS images that can be employed without having acquired a specific dataset for the transducer in consideration. The approach, like those mentioned before, is based on the theory of deformable superellipses. Two kinds of methodologies can be employed for achieving segmentation with superellipses. In the first, image characteristics, such as edge maps or region energy, are employed for performing automatic image segmentation [19,20]. In the second, the geometry of the prostate is inferred from user-defined points [18].
The method of Gong et al. [19] requires the generation of edge maps in order to produce the final segmentation, whereas the approach of Saroul et al. [20] needs assumptions to be made about the region energy. On the other hand, the proposed approach does not need to make these assumptions about the input data, given that image quality can vary widely among different transducers.
Compared with the approach of Mahdavi et al. [18], the proposed method does not require points to be selected in a precise geometric way; the user has flexibility in how to place them. Moreover, the proposed formulation allows the modeling of more irregular 3D shapes, since the user can insert points in an arbitrary number of slices, thus eventually obtaining shapes which go beyond tapered and warped ellipsoids.
Extensive experiments have been carried out to outline the guidelines that a human operator should follow when using the proposed algorithm, in order to minimize the number of points required and maximize the segmentation accuracy. In any case, performing a second iteration can mitigate any problems that arise when non-optimal points are placed in the first iteration.
Lastly, an application of the proposed method in an image fusion setup with MRI is shown. The segmentation module for the MRI relies on the nnU-Net framework [15]. Segmentation masks from both domains are then registered in order to accomplish the image fusion task.
The remainder of the paper is structured as follows: Section 2 describes the materials and methods considered for this study, including the dataset and the developed methodologies for segmentation from MRI and TRUS, and the setup of MRI–TRUS registration. Thereafter, Section 3 presents the results for both the segmentation and the registration tasks. Section 4 discusses the obtained results, and Section 5 concludes the paper and offers future perspectives.

2. Materials and Methods

2.1. Dataset

A dataset containing anonymized imaging data of the human prostate from N = 3 patients was made publicly available by Fedorov et al. [21]. This dataset will be referred to as ZENODO throughout this paper. For each patient, both MRI and TRUS examinations were performed: the former to stage the disease, and the latter to complete a volumetric examination in preparation for the brachytherapy implant. Both modalities are 3D scalar images. Annotations provided by Fedorov et al. include manual segmentations of the whole prostate gland for both MRI and TRUS, and fiducial points placed in specific anatomical sites to improve subsequent image registration and fusion. In particular, fiducials are placed at the urethra entry into the prostate at the base (UB), the verumontanum (VM), and the urethra entry into the prostate at the apex (UA).
In order to validate DL models for the task of MRI prostate segmentation, the datasets PROMISE12 [14] and SAML [22,23] were also included in the analysis.
The ZENODO dataset was employed to test the proposed method for TRUS segmentation, MRI segmentation, and TRUS–MRI registration, whereas PROMISE12 and SAML were employed to validate the nnU-Net model for MRI segmentation.
Sample images for both domains, TRUS and MRI, are reported in Figure 2. A summarized table for the considered materials is provided in Table 1.
Table 1. Datasets description. Only ZENODO contains TRUS images, and only in a very limited quantity (3). Fiducial points are also present only in the last dataset. Seg—segmentation; Reg—registration.

Dataset         | Imaging Modality | Task                   | Number of Images | Ground Truth Segmentation | Fiducial Points | File Format
PROMISE12 [14]  | MR (T2W)         | MR Seg                 | 50               | Yes                       | No              | NIfTI
SAML [22,23]    | MR (T2W)         | MR Seg                 | 116              | Yes                       | No              | NIfTI
ZENODO [21]     | MR (T2W), TRUS   | TRUS Seg, MR/TRUS Reg  | 3, 3             | Yes                       | Yes             | NRRD

2.2. Workflow

The workflow employed for achieving image fusion, starting with segmentation for both imaging modalities, namely TRUS and MRI, is reported in Figure 3. In clinical practice, the two segmentations do not happen at the same time, since MRI segmentation can be achieved preoperatively, whereas TRUS segmentation has to be obtained intraoperatively, at the start of the prostate biopsy procedure.
Segmentation from MRI involves a preprocessing stage, so that images can be fed to a deep learning architecture, the nnU-Net. Lastly, postprocessing is performed so that noisy elements can be removed from the predicted masks (e.g., only one connected component is expected), increasing segmentation accuracy. The described operations can be carried out in a fully automatic way. MR images are especially important for identifying the target region for biopsy, since they have better contrast than other imaging modalities. Details are described in Section 2.3.
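As an illustrative sketch (not the authors' exact implementation), the connected-component postprocessing step could be written in Python as follows:

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component of a binary segmentation mask."""
    labeled, n_components = ndimage.label(mask)
    if n_components == 0:
        return mask
    # Size of each labeled component (label 0 is background)
    sizes = ndimage.sum(mask, labeled, index=range(1, n_components + 1))
    largest_label = int(np.argmax(sizes)) + 1
    return (labeled == largest_label).astype(mask.dtype)
```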
Segmentation from TRUS is achieved with a semiautomatic algorithm, which requires input points from the user. The physician has to annotate points in at least three axial slices of the prostate gland, taking care when placing points in the deformed zones (the transducer itself introduces deformation). Starting from these points, a deformable superellipse is fitted with an optimization algorithm. Then, interpolation is employed to achieve the 3D reconstruction of the prostate gland. The entire procedure is explained in Section 2.4.
Then, having the segmentation masks from both the TRUS and MRI modalities, registration can be performed, enabling image fusion, which allows tissue information coming from both modalities to be seen at the same time. Optionally, a set of anatomical landmarks can be inserted by the user to ease and constrain the registration optimization step. The procedure is presented in Section 2.5.

2.3. MRI Segmentation

The semantic segmentation of the prostate gland from MRI can be efficiently addressed via DL techniques, such as fully convolutional neural networks [24]. Semantic segmentation, which provides the basis for subsequent classification and characterization tasks [25,26], is essential in numerous clinical applications, including artificial intelligence in diagnostic support systems, therapy planning, intraoperative assistance, and monitoring of tumor growth. It is a computer vision task that can be tackled with DL algorithms and consists of labeling each pixel of an input image, without distinguishing the different instances of objects [27,28]; semantic segmentation can be seen as an image-to-image conversion problem, where the input is the original image and each pixel value of the output image indicates the class associated with that pixel [29].
Most semantic segmentation architectures are based on encoder–decoder networks. The encoder is devoted to feature extraction and subsampling. Decoding is an upsampling operation, in which the spatial information output from the encoding layers is reconstructed, increasing the spatial resolution. Encoder–decoder structures have been implemented in different convolutional network architectures, including SegNet [30], U-Net [31], U-Net 3D [32], and V-Net [33]. Besides prostate segmentation, applications of these architectures to medical imaging tasks encompass liver vessel delineation [34], liver segment classification [35], COVID-19 lung lesion segmentation [36], and vertebrae segmentation [37].
In this work, in order to perform the semantic segmentation of the prostate gland from MRI, the nnU-Net framework has been exploited. It allows semantic segmentation tasks to be approached with standardized pipelines [15,38], and its architecture is based on those of U-Net and U-Net 3D. It was originally conceived during the Medical Segmentation Decathlon challenge [39], where it emerged as the leading approach in all tasks. The advantages of this method consist of the automatic configuration of preprocessing, data augmentation, training, inference, and postprocessing. Parameters to set for training nnU-Net include the number of epochs, the initial learning rate, the batch size, and the patch size; the loss function is the sum of the Dice loss and the cross-entropy.
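For illustration, a minimal PyTorch sketch of such a combined Dice plus cross-entropy loss is reported below; this is a generic formulation written for clarity, not nnU-Net's exact implementation:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Sum of a soft Dice loss and cross-entropy for multi-class segmentation.

    logits: (N, C, ...) raw network outputs; target: (N, ...) integer labels.
    """
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    # One-hot encode the target and move the class axis next to the batch axis
    onehot = F.one_hot(target, num_classes=logits.shape[1]).movedim(-1, 1).float()
    spatial_dims = tuple(range(2, logits.ndim))
    intersection = (probs * onehot).sum(spatial_dims)
    denominator = probs.sum(spatial_dims) + onehot.sum(spatial_dims)
    dice_loss = 1.0 - ((2.0 * intersection + eps) / (denominator + eps)).mean()
    return ce + dice_loss
```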

2.4. TRUS Segmentation

Segmentation of anatomical structures in noisy data, such as TRUS images, is a complex task since boundaries are not clearly defined, as shown in Figure 1. Therefore, the adoption of prior information about the geometric structure of interest is useful to constrain the model deformation [40,41]. Deformable models can be used to achieve this result.
Geometry, physics, and mathematical optimization lay the foundations for segmentation algorithms based on deformable models [40]. The constraint on the model shape is derived from geometry, the evolution of the shape in space is governed by physical theories, and optimization theory is employed to fit the model to the available data [42]. Segmentation of anatomical structures with deformable models is achieved by exploiting an energy minimization framework. Two kinds of energies are considered: internal and external energies. The deformable model is propagated in the direction of the object contours by external energies, whereas the smoothness of the boundaries is preserved by internal energies. The deformable model framework comprises various methodologies that, according to Ghose et al. [3], can be categorized into deformable mesh, active-shape models, level sets, active-contour models, and curve-fitting models. More advanced approaches may combine these techniques, with the idea that merging a priori information about the boundaries, region, shape, and features of the prostate can provide more accurate models, such as the deformable superellipse formulation of Gong et al. [19].

2.4.1. Shape Models

In a wide variety of medical imaging scenarios, the general location, orientation, and shape of the object of interest are known a priori. As reported by previous studies concerning TRUS images, prostate contours appear smooth, with a closed, near-convex shape [19]. This information can be embedded into the deformable model in different forms: as initial conditions, as a way of constraining model shape parameters, and as part of the model fitting procedure. Global shape properties can be modeled with parametric shape models. The advantage of this technique is that it does not require the presence of anatomical landmarks.
Furthermore, the representation of shapes can be tackled with many different methods [43,44]. For instance, Tutar et al. [45] proposed that 3D prostate boundaries could be modeled with spherical harmonics of degree eight. Local deformations can be controlled through additional parameters, enabling the modeling of complex shapes; however, this comes at the cost of increased computational complexity.
Conversely, reducing the number and range of parameters allows the global shape to be modeled with approaches that are numerically stable and fast, leading to more compact representations. In the following section, the authors introduce the deformable superellipse, a powerful model for the geometry of the prostate gland [19]. When the deformable superellipse is not capable of properly modeling all the nuances of the prostate region in a 2D slice, bidimensional B-splines [46] can be employed in the proposed approach, obtaining very refined results while retaining the possibility of modeling a 3D shape with a relatively low number of parameters.

2.4.2. Deformable Superellipses

Superellipses generalize ellipses in a natural way. Different base geometrical shapes, such as ellipses, parallelograms, rectangles, and pinched diamonds, can be modeled through superellipses by handling a small number of parameters [47,48]. Examples of shapes that can be modeled by superellipses are portrayed in Figure 4. The 3D generalization of the superellipse, the superellipsoid, has not been considered, since it makes assumptions about the 3D regularity of the prostate shape which are too simplistic.
A centered superellipse can be expressed in the following parametric form, reported in Equation (1):
$$x = a_x \cdot |\cos(\theta)|^{\frac{2}{\epsilon}} \cdot \operatorname{sign}(\cos(\theta)), \qquad y = a_y \cdot |\sin(\theta)|^{\frac{2}{\epsilon}} \cdot \operatorname{sign}(\sin(\theta))$$
where the size parameters $a_x > 0$, $a_y > 0$ define the lengths of the semi-axes, and $\epsilon > 0$ specifies the squareness in the 2D plane, as shown in Figure 4. The corresponding implicit form is given by Equation (2):
$$\left| \frac{x}{a_x} \right|^{\epsilon} + \left| \frac{y}{a_y} \right|^{\epsilon} = 1$$
The inside–outside function is reported in Equation (3):
$$f(x, y) = \left| \frac{x}{a_x} \right|^{\epsilon} + \left| \frac{y}{a_y} \right|^{\epsilon}$$
where if $f(x, y) = 1$, the point $(x, y)$ lies on the superellipse; if $f(x, y) > 1$, the point lies outside the superellipse; and if $f(x, y) < 1$, the point lies inside the superellipse.
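A minimal Python sketch of the centered superellipse of Equations (1)–(3), covering the parametric sampling and the inside–outside test, could look as follows (function names are illustrative):

```python
import numpy as np

def superellipse_points(a_x: float, a_y: float, eps: float, n: int = 360):
    """Sample the centered superellipse of Equation (1) in parametric form."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = a_x * np.abs(np.cos(theta)) ** (2.0 / eps) * np.sign(np.cos(theta))
    y = a_y * np.abs(np.sin(theta)) ** (2.0 / eps) * np.sign(np.sin(theta))
    return x, y

def inside_outside(x, y, a_x: float, a_y: float, eps: float):
    """Inside-outside function of Equation (3): <1 inside, =1 on, >1 outside."""
    return np.abs(x / a_x) ** eps + np.abs(y / a_y) ** eps
```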
In this basic version, the superellipse model does not allow all the deformations required to build a proper model of the prostate gland. Nonetheless, geometric deformations such as translation, rotation, tapering, and bending can extend the variety of shapes modeled by the deformable superellipse [49,50]. Moreover, these transformations can be modeled with a small number of parameters, given that translation along an axis, rotation, tapering, and bending are each described by a single parameter. Deformable superellipses can then be characterized by a parameter vector $\mathbf{p}$, defined as in Equation (4):
$$\mathbf{p} = \{ a_x, a_y, l_x, l_y, r, \epsilon, t, b \}$$
where $\epsilon$ is the squareness parameter and $a_x$, $a_y$ are the semi-axis lengths defined above. The other parameters are those involved in the global similarity transformations for superellipses [19]: $l_x$, $l_y$ are the translations along the x and y axes, $r$ is the rotation angle, and $t$ and $b$ are the tapering and the circular bending along the y-axis, respectively.
Details of all these geometric transformations are reported in Appendix A, whereas the inverse transformations are reported in Appendix B. Examples of deformable superellipses modeled by variations in tapering and bending are reported in Figure 4.

Optimization Framework

In the Bayesian framework proposed by Gong et al. [19], the authors assumed that some parameters (those concerning shape) have a Gaussian prior distribution, $N(\mathbf{p}_s)$, whereas the others (those concerning pose) have a uniform prior distribution, $Un(\mathbf{p}_p)$. The edge strength likelihood is denoted as $E$. Then, according to the Bayes rule, the posterior probability can be modeled as in Equation (5):
$$Pr(\mathbf{p} \mid E) = \frac{Pr(\mathbf{p}) \cdot Pr(E \mid \mathbf{p})}{Pr(E)} = \frac{Un(\mathbf{p}_p) \cdot N(\mathbf{p}_s) \cdot Pr(E \mid \mathbf{p})}{Pr(E)} \propto Un(\mathbf{p}_p) \cdot N(\mathbf{p}_s) \cdot Pr(E \mid \mathbf{p})$$
This results in optimizing the log likelihood in Equation (6):
$$L = \ln\big(Pr(\mathbf{p}_s)\big) + \ln\big(Pr(E \mid \mathbf{p})\big)$$

2.4.3. Proposed Approach

In the proposed approach, the deformable superellipse is modeled as specified in Section 2.4.2. Geometric deformations to the base superellipse shape can be obtained as reported in Appendix A.
The problem of modeling $Pr(\mathbf{p} \mid E)$, as in Equation (5), is that it requires prior data on edge maps from images acquired with the same ultrasound device that will be used for carrying out the procedures. When it is not feasible to collect such images in advance, it may be preferable to model $Pr(\mathbf{p} \mid U)$, where $U$ is a set of user-defined points. If the model does not require rigid assumptions about $U$, it can provide a fast and reliable system for achieving prostate gland segmentation with only moderate user interaction and without the need to build a large training set.
Therefore, in the proposed formulation, the posterior probability can be written, as reported in Equation (7):
$$Pr(\mathbf{p} \mid U) = \frac{Pr(\mathbf{p}) \cdot Pr(U \mid \mathbf{p})}{Pr(U)} \propto Un(\mathbf{p}_p) \cdot N(\mathbf{p}_s) \cdot Pr(U \mid \mathbf{p})$$
The prior about shape parameters can be optimized by maximizing Equation (8) [19]:
$$\ln\big(Pr(\mathbf{p}_s)\big) = -\sum_j \frac{(p_j - m_j)^2}{2 \sigma_j^2}$$
Instead, the likelihood linked to the term $Pr(U \mid \mathbf{p})$ can be maximized by minimizing the energy in Equation (9), where $U$ is the set of user-defined points, $C$ is the polygon representing the prostate mask boundary, $d$ is the point-to-polygon distance, and $E(C; U)$ is the energy function to minimize.
$$E(C; U) = \sum_{(x, y) \in U} d\big(C, (x, y)\big)^2$$
After 2D superellipses have been fitted to the slices where the user inserts points, a 3D model can be reconstructed by performing linear interpolation of the parameters involved in the vector $\mathbf{p}$. In order to build a 3D model of the prostate gland, a minimum of three slices has to be labeled. The annotated slices must include the base, apex, and mid-gland regions of the prostate. On the base and the apex, a minimum of 4 points must be inserted by the user, whereas on the mid-gland, a minimum of 6 is recommended. For mid-gland slices with irregular shapes, up to 12 points may be beneficial.
Since the user can add more than three slices, shapes which are more complex than one tapered and warped superellipsoid or two semi-superellipsoids can be obtained. The following paragraph describes how the optimization algorithm is carried out during a procedure with an operator.

2.4.4. Implementation Details

The general workflow employed for TRUS segmentation is reported in Figure 5.
First, the user is asked to select points in at least three slices of the TRUS volume. In every slice, the user has to select a number of points ranging approximately from 4 to 12, as detailed at the end of Section 2.4.3. In order to ease this process for the experiments realized during this research, the authors created a JSON interface with the popular 3D Slicer software [51].
The user can choose between two types of models when inserting points. The first is the superellipse, and the second exploits bidimensional B-splines (as implemented by the method scipy.interpolate.splprep). For the purposes of 3D modeling, a superellipse is then fitted to the spline in the second case. For mid-gland slices, the B-spline configuration, especially when 10–12 points are annotated by the user, is the recommended way to proceed. When there are few annotated points, a deformable superellipse is more likely to work properly, considering that it has a relatively low number of parameters. In particular, in the configuration with the least possible number of points, where the user places 4 points at the base, 6 points at the mid-gland, and 4 points at the apex, the deformable superellipse should be exploited.
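A possible sketch of the B-spline option, fitting a closed bidimensional B-spline through the user-defined points with scipy.interpolate.splprep before the superellipse fit, is given below; the sampling density and smoothing factor are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def closed_bspline_contour(points, n_samples: int = 200, smooth: float = 0.0) -> np.ndarray:
    """Fit a closed (periodic) 2D B-spline through the user-defined points
    and return a densely sampled contour of shape (n_samples, 2)."""
    pts = np.asarray(points, dtype=float)
    # Close the polygon so that the periodic spline wraps around correctly
    pts = np.vstack([pts, pts[:1]])
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smooth, per=1)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```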
In order to fit the 2D superellipse to the slice points, an iterative minimization procedure has been implemented. At every iteration, the optimizer passes a vector $\mathbf{p}$ of parameters to a superellipse class, which builds an object with the given parameters and measures its energy with respect to the user-defined points.
After the object is created, the inside–outside function reported in Equation (3) is used to create a mask of points which satisfy the condition for the centered superellipse. Then, these points are transformed by applying the deformations in the following order: rotation, as defined in Equation (A2); linear tapering along the y-axis, as defined in Equation (A3); circular bending along the y-axis, as defined in Equation (A4); translation, as defined in Equation (A1).
The mask obtained by these transformations is subjected to a morphological closing operator, since holes arise during the transformation process. Then, the energy of the built superellipse can be defined as the sum of squared distances from the user-defined points to the polygon of the mask boundary. The point-to-polygon distance can be calculated with the pointPolygonTest method from the OpenCV library. At the end of the minimization procedure, the optimizer finds the best vector $\mathbf{p}$ for the input points given by the user.
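For illustration, the energy of Equation (9) computed with OpenCV's pointPolygonTest might be sketched as follows; the contour is assumed to be the boundary polygon extracted from the transformed superellipse mask:

```python
import numpy as np
import cv2

def superellipse_energy(contour: np.ndarray, user_points) -> float:
    """Sum of squared distances from user-defined points to the candidate
    contour (Equation (9)); contour is an (N, 2) array of boundary pixels."""
    cnt = np.asarray(contour, dtype=np.int32).reshape(-1, 1, 2)
    energy = 0.0
    for x, y in user_points:
        # measureDist=True returns the signed Euclidean distance to the boundary
        d = cv2.pointPolygonTest(cnt, (float(x), float(y)), True)
        energy += d ** 2
    return energy
```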
Lastly, the 2D deformable superellipse models fitted in multiple slices (at least three, including the base, the apex, and the mid-gland) are employed to reconstruct the 3D volume by performing linear interpolation of the parameters contained in the vector $\mathbf{p}$. The program also returns a list of JSON files which can be loaded in 3D Slicer to refine the segmentation results and eventually perform a second iteration. In the second user iteration, B-splines are employed for providing the contour of the prostate gland, since the user only needs to adjust the boundary points provided by the previous iteration of the algorithm.
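A minimal sketch of the inter-slice parameter interpolation used for the 3D reconstruction, assuming one fitted vector p per annotated slice, could look like this:

```python
import numpy as np

def interpolate_parameters(slice_idx: float, annotated_idx, annotated_p) -> np.ndarray:
    """Linearly interpolate every component of the superellipse parameter
    vector p between annotated slices (annotated_p has shape (n_slices, 8))."""
    annotated_idx = np.asarray(annotated_idx, dtype=float)
    annotated_p = np.asarray(annotated_p, dtype=float)
    return np.array([
        np.interp(slice_idx, annotated_idx, annotated_p[:, j])
        for j in range(annotated_p.shape[1])
    ])

# Example: p vectors fitted at the base (slice 4), mid-gland (12), and apex (20)
# p_slice_8 = interpolate_parameters(8, [4, 12, 20], fitted_p_vectors)
```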

2.5. MRI–TRUS Registration

The described registration algorithm is segmentation-based, so that both MRI and TRUS segmentation masks are required for performing the procedure. Other authors have also considered this step to be fundamental [52,53]. The particular challenge of MRI–TRUS registration is that the anatomical areas visible in one modality may not be visible in the other.
Before starting with the registration procedure, preprocessing is performed with the purpose of improving and easing the fusion algorithm results.
First, the 3D images are cropped into 3D bounding boxes (i.e., volumes of interest (VOIs)) that extend 10 mm beyond the margin delineated by the segmentation mask. Then, the VOIs are resampled to make them isotropic, with the same resolution for both modalities. For the binary segmentation masks, a nearest-neighbor interpolator is employed to perform the resampling. The output resolution is set to 0.3 mm × 0.3 mm × 0.3 mm.
Segmentation masks are smoothed with a Gaussian kernel with σ = 3 [53]. Lastly, the Maurer signed-distance transformation, which exploits Euclidean distance transform [54], is applied to the segmentation masks. The steps involved in the registration preprocessing are reported in Figure 6.
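The mask preprocessing chain (isotropic resampling, Gaussian smoothing, and signed Maurer distance map) could be sketched with SimpleITK as follows; the VOI cropping is omitted, and the filter settings shown are assumptions rather than the authors' exact configuration:

```python
import SimpleITK as sitk

def preprocess_mask(mask: sitk.Image, spacing=(0.3, 0.3, 0.3), sigma: float = 3.0) -> sitk.Image:
    """Resample a binary mask to an isotropic grid, smooth it, and return a
    signed distance map suitable for intensity-based registration."""
    # Resample with nearest-neighbour interpolation to the target spacing
    new_size = [int(round(sz * sp / out_sp)) for sz, sp, out_sp
                in zip(mask.GetSize(), mask.GetSpacing(), spacing)]
    resampled = sitk.Resample(mask, new_size, sitk.Transform(), sitk.sitkNearestNeighbor,
                              mask.GetOrigin(), spacing, mask.GetDirection(),
                              0, mask.GetPixelID())
    # Gaussian smoothing of the mask, then re-binarisation
    smoothed = sitk.SmoothingRecursiveGaussian(sitk.Cast(resampled, sitk.sitkFloat32), sigma)
    binary = smoothed > 0.5
    # Signed Maurer (Euclidean) distance transform of the smoothed mask
    return sitk.SignedMaurerDistanceMap(binary, insideIsPositive=False,
                                        squaredDistance=False, useImageSpacing=True)
```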
The purpose of the initialization is to simplify the calculation of the center of rotation and translation needed for the rigid transformation. Two kinds of initialization have been considered: (i) based on the center of images; (ii) based on a set of landmarks.
In the first case, the centers of the images are calculated in the spatial coordinate system, considering the origin, dimensions, and spacing of the images. The geometric center of the moving image is given as the initial center of the rigid transformation, and the vector that goes from the center of the fixed image to the center of the moving image is given as the initial translation vector.
The second approach, instead, determines an initial transformation by considering a set of landmarks: it determines the transform that best maps the fixed-image landmarks onto the corresponding moving-image landmarks in the least-squares sense [55].
Since the proposed approach aims to perform the registration of distance maps whose intensity values have the same range and meaning, the dissimilarity measure employed is the sum of squared intensity differences (SSD). Lower values of this metric correspond to better results. The optimizer employed is based on gradient descent, whose aim is to find the set of parameters defining a transformation that optimizes the metric as well as possible. The overall workflow employed for the registration, with all the components described in this section, is portrayed in Figure 7.
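A possible SimpleITK sketch of this registration step, with an SSD (mean-squares) metric, a gradient-descent optimizer, and a center-of-geometry initialization, is reported below; a rigid transform is used here for brevity, and the numeric parameters are assumptions:

```python
import SimpleITK as sitk

def register_distance_maps(fixed: sitk.Image, moving: sitk.Image) -> sitk.Transform:
    """Register the two signed distance maps with an SSD metric and gradient descent."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    # Center-of-geometry initialization; a landmark-based alternative could use
    # sitk.LandmarkBasedTransformInitializer(...) with the annotated fiducials.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()            # SSD between distance-map intensities
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)
```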

2.6. Performance Metrics

The performance of the segmentation and registration algorithms analyzed in this study was evaluated by calculating metrics based on the overlap of volumes and metrics based on the distances of the external surface points. The volumetric overlap metrics require the predicted volume, $P$, and the ground-truth volume, $G$, to be introduced. They are the Dice similarity coefficient (DSC), the volumetric overlap error (VOE), and the relative volume difference (RVD), defined as in Equations (10)–(12), respectively.
$$DSC(P, G) = \frac{2 \cdot |P \cap G|}{|P| + |G|}$$
$$VOE(P, G) = 1 - \frac{|P \cap G|}{|P \cup G|}$$
$$RVD(P, G) = \frac{|P| - |G|}{|G|}$$
The metrics based on the concept of surface distances include the Hausdorff distance (HD) and the average symmetric surface distance (ASSD). Definitions for these metrics can be found in [24].
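For reference, the overlap metrics of Equations (10)–(12) can be computed from binary masks as in the following short sketch:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray):
    """Return DSC, VOE, and RVD (Equations (10)-(12)) for two binary masks."""
    p, g = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dsc = 2.0 * intersection / (p.sum() + g.sum())
    voe = 1.0 - intersection / union
    rvd = (float(p.sum()) - g.sum()) / g.sum()
    return dsc, voe, rvd
```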

3. Results

3.1. Segmentation

3.1.1. MRI

Quantitative results for MRI segmentation with nnU-Net are reported in Table 2, and the qualitative results, as segmentation masks, are depicted in Figure 8. The nnU-Net model trained on the PROMISE12 challenge obtained the best results [14,15]. The SAML-V dataset was obtained by sampling 24 images for validation from the SAML dataset. It is worth noting that the Dice coefficient is higher than 88% and ASSD is less than 1 mm for both validation sets under consideration, showing the reliability of the nnU-Net framework for the automatic MRI segmentation of the prostate region.

3.1.2. TRUS

Quantitative results for TRUS segmentation with the developed methodology based on deformable superellipses are reported in Table 3, whereas sample segmentation images are depicted in Figure 9. Three experiments have been conducted for each case, placing 4 points on the base and 4 on the apex, using only the superellipse to fit the contours. On the mid-gland, instead, a number of points varying from 10 to 12 has been considered, exploiting B-splines before fitting the superellipse to finally achieve the 3D modeling of the prostate gland. Results are reported as mean ± std of the experiments performed on each case. It is possible to see that the results are satisfactory overall, with the Dice coefficient being greater than 87% in all cases. Moreover, the proposed implementation is iterative, so the user can refine the results until the desired performance is reached. For the purposes of this research, the experiments stopped at the second iteration, which allowed the results to be improved in all cases.

3.2. Registration

Quantitative results for registration across the two considered imaging modalities, TRUS and MRI, are reported in Table 4 for the configurations with and without landmarks, respectively. An example of the workflow for the image fusion is depicted in Figure 10. The Dice coefficient is higher than 91% for all the cases, and the HD is less than 4 mm, demonstrating that the developed registration method is promising.

4. Discussion

Prostate segmentation is a pivotal but strenuous task which is required for targeted prostate biopsy procedures. Moreover, every transducer for TRUS can produce different images, resulting in a variety of conditions which make it difficult to transfer what has been learned on one dataset to another. Lastly, the lack of annotated data for TRUS segmentation adds to the peculiarity of the task: the ZENODO dataset, the only one available for this research, consists of only three images.
Deformable superellipses are shape models that allow a variety of geometric deformations to be modeled starting from ellipses, and can resemble the most common prostate shapes. In fact, the prostate shape is well characterized as a tapered ellipsoid [18]. When the procedure is performed, the transducer induces a slight posterior deformation in the patient's prostate, which can be modeled, for instance, with the bending parameter, $b$.
Therefore, this work proposed a novel formulation of the deformable superellipse to make it a suitable method for TRUS segmentation, even in the absence of training data from a given transducer. Other approaches, such as that of Gong et al. [19], rely on edge-detection algorithms and could therefore be employed for automatic segmentation; however, such an approach requires training data from the specific transducer. The advantage of the proposed method is that it can be applied in any circumstance, only requires moderate interaction with the physician, and consistently yields satisfactory results.
In the experiments carried out for this study, the proposed method requires 41 ± 7 s for placing the points in three or four slices, whereas it takes 5 ± 1 s to build the 3D model. The time needed for the second iteration is more variable (74 ± 32 s). The superellipse implementation of Mahdavi et al. [18] took 32 ± 14 s for initialization, which is similar to the time needed to place the initial user-defined points in the proposed approach. On the computational side, it needed 14 ± 1 s, which is more than the proposed implementation. Furthermore, in their case, segmentation refinement can be performed by the user, with a time ranging between 1 and 3 min. It is not possible to directly compare the proposed approach with the work of Gong et al. [19]: their method is capable of performing segmentation in less than 5 s per slice, but it only delineates 2D boundaries.
The developed methodology managed to achieve satisfactory results, reaching a Dice coefficient greater than 87% in all the images considered for the test, coming from the ZENODO dataset. The research then focused on proving the applicability of this module in a targeted biopsy setup. Thus, the nnU-Net framework was employed for the task of performing segmentation from MRI, achieving a Dice coefficient higher than 88% on the SAML-V dataset and higher than 91% on the ZENODO dataset.
Lastly, the authors developed a custom registration procedure, which allowed a Dice coefficient higher than 91% and an HD lower than 4 mm to be reached in all cases, showing the effectiveness of the proposed framework for clinical applications. In the registration framework, both an initializer based on the centers of the images and another which relied upon a set of landmarks were considered. From the obtained results, it is possible to note that the former reached Dice coefficients of 91.77%, 94.82%, and 93.61%, with HDs of 3.77 mm, 2.12 mm, and 3.55 mm, whereas the latter achieved Dice coefficients of 91.78%, 94.85%, and 93.60%, with HDs of 3.77 mm, 2.09 mm, and 2.29 mm. Hence, the two methods provide similar results; therefore, the simpler center-based initialization can eventually be adopted for the affine registration procedure.
Overall, the results obtained with both segmentation methods are satisfactory for implementation in a targeted prostate biopsy setup. The registration framework can be further improved in order to accommodate deformable registration, which would allow even better results in the image-fusion step.

5. Conclusions

Prostate segmentation from MRI and TRUS is a complex challenge, but it is very useful in clinical setups. For MRI, with the advent of the nnU-Net framework, the challenge is more easily met, since a standardized pipeline can be employed for semantic segmentation. However, there is still a lack of substantial data and standardized methodologies for TRUS images. In this work, the authors proposed an approach that can be employed in the absence of training data; this approach only relies on the theory of deformable superellipses. With the only requirement of moderate interaction with the user, the developed methodology reliably segments the prostate from TRUS images. In order to show the effectiveness of the overall workflow in clinical setups, an image-fusion procedure which relies on image registration between TRUS and MRI was also developed. Thus, we have successfully realized a semiautomatic segmentation framework for the prostate from TRUS images, without relying on a large-scale dataset. Furthermore, the proposed framework can be employed as an annotation tool to ease and speed up the construction of prostate segmentation datasets, so that fully automated methods can eventually be developed. In future works, deformable registration techniques can be considered to further improve the image-fusion step.

Author Contributions

Conceptualization, N.A., A.B., D.B. and V.B.; methodology, N.A., A.B., V.P.N., F.G., E.A., V.T., D.B. and V.B.; software, N.A. and A.B.; supervision, N.A., A.B., G.B., D.B. and V.B.; validation, G.B. and V.T.; visualization, N.A., V.P.N., F.G. and E.A.; writing—original draft preparation, N.A., A.B., V.P.N., F.G., S.M.H. and D.B.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset with paired TRUS and MRI images used for this study is publicly available on ZENODO [21]. The data for the PROMISE12 [14,15] and the SAML [22,23] challenges are also publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ASSD   average symmetric surface distance
CT     computed tomography
DAF    deep attentional features
DL     deep learning
DSC    Dice similarity coefficient
HD     Hausdorff distance
MRI    magnetic resonance imaging
RVD    relative volume difference
SSD    sum of squares of intensity differences
TRUS   transrectal ultrasound
UA     prostate apex
UB     prostate base
US     ultrasound
VM     prostate verumontanum
VOE    volumetric overlap error

Appendix A. Geometric Transformations

Translation ($l_x$, $l_y$):
$$x' = x + l_x, \qquad y' = y + l_y \qquad \text{(A1)}$$
Rotation ($r$):
$$x' = x \cdot \cos(r) - y \cdot \sin(r), \qquad y' = x \cdot \sin(r) + y \cdot \cos(r) \qquad \text{(A2)}$$
Linear tapering along the y-axis ($t$):
$$x' = x \cdot \left( \frac{t \cdot y}{a_y} + 1 \right), \qquad y' = y \qquad \text{(A3)}$$
Circular bending along the y-axis ($b$):
$$x' = \left( \frac{a_y}{b} - y \right) \cdot \sin\!\left( \frac{x}{\frac{a_y}{b} - y} \right), \qquad y' = \frac{a_y}{b} - \left( \frac{a_y}{b} - y \right) \cdot \cos\!\left( \frac{x}{\frac{a_y}{b} - y} \right) \qquad \text{(A4)}$$
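As an illustration, the following Python sketch applies the forward deformations of Equations (A1)–(A4) to the points of a centered superellipse, in the order used by the fitting routine of Section 2.4.4 (rotation, tapering, bending, translation); the dictionary keys are hypothetical names for the components of the vector p:

```python
import numpy as np

def deform(x: np.ndarray, y: np.ndarray, p: dict):
    """Apply rotation, tapering, bending, and translation (Equations (A1)-(A4))
    to the points (x, y) of a centered superellipse."""
    # Rotation (A2)
    xr = x * np.cos(p["r"]) - y * np.sin(p["r"])
    yr = x * np.sin(p["r"]) + y * np.cos(p["r"])
    # Linear tapering along the y-axis (A3)
    xt = xr * (p["t"] * yr / p["ay"] + 1.0)
    yt = yr
    # Circular bending along the y-axis (A4), skipped when b is close to zero
    if abs(p["b"]) > 1e-8:
        radius = p["ay"] / p["b"] - yt
        xb = radius * np.sin(xt / radius)
        yb = p["ay"] / p["b"] - radius * np.cos(xt / radius)
    else:
        xb, yb = xt, yt
    # Translation (A1)
    return xb + p["lx"], yb + p["ly"]
```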

Appendix B. Inverse Transformations

Inverse translation ($l_x$, $l_y$):
$$x = x' - l_x, \qquad y = y' - l_y$$
Inverse rotation ($r$):
$$x = x' \cdot \cos(r) + y' \cdot \sin(r), \qquad y = -x' \cdot \sin(r) + y' \cdot \cos(r)$$
Inverse linear tapering along the y-axis ($t$):
$$x = \frac{x'}{\frac{t \cdot y'}{a_y} + 1}, \qquad y = y'$$
Inverse circular bending along the y-axis ($b$):
$$x = \operatorname{sign}(b) \cdot \arctan\!\left( \frac{x'}{\frac{a_y}{b} - y'} \right) \cdot \sqrt{(x')^2 + \left( y' - \frac{a_y}{b} \right)^2}, \qquad y = \frac{a_y}{b} - \operatorname{sign}(b) \cdot \sqrt{(x')^2 + \left( y' - \frac{a_y}{b} \right)^2}$$

References

  1. World Health Organization. Worldwide cancer data. In World Cancer Research Fund; World Health Organization: Geneva, Switzerland, 2018; pp. 7–12. [Google Scholar]
  2. Rawla, P. Epidemiology of prostate cancer. World J. Oncol. 2019, 10, 63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Ghose, S.; Oliver, A.; Martí, R.; Lladó, X.; Vilanova, J.C.; Freixenet, J.; Mitra, J.; Sidibé, D.; Meriaudeau, F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput. Methods Programs Biomed. 2012, 108, 262–287. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Devetzis, K.; Kum, F.; Popert, R. Recent Advances in Systematic and Targeted Prostate Biopsies. Res. Rep. Urol. 2021, 13, 799. [Google Scholar] [CrossRef] [PubMed]
  5. Bass, E.; Pantovic, A.; Connor, M.; Gabe, R.; Padhani, A.; Rockall, A.; Sokhi, H.; Tam, H.; Winkler, M.; Ahmed, H. A systematic review and meta-analysis of the diagnostic accuracy of biparametric prostate MRI for prostate cancer in men at risk. Prostate Cancer Prostatic Dis. 2021, 24, 596–611. [Google Scholar] [CrossRef]
  6. Zhan, Y.; Shen, D. Deformable segmentation of 3-D ultrasound prostate images using statistical texture matching method. IEEE Trans. Med. Imaging 2006, 25, 256–272. [Google Scholar] [CrossRef]
  7. Singh, R.P.; Gupta, S.; Acharya, U.R. Segmentation of prostate contours for automated diagnosis using ultrasound images: A survey. J. Comput. Sci. 2017, 21, 223–231. [Google Scholar] [CrossRef]
  8. Jones, S.; Carter, K.R. Sonography Endorectal Prostate Assessment, Protocols, and Interpretation. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2021. [Google Scholar]
  9. Bevilacqua, V.; Mastronardi, G.; Piazzolla, A. An Evolutionary Method for Model-Based Automatic Segmentation of Lower Abdomen CT Images for Radiotherapy Planning. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 320–327. [Google Scholar] [CrossRef]
  10. Garg, G.; Juneja, M. A survey of prostate segmentation techniques in different imaging modalities. Curr. Med. Imaging 2018, 14, 19–46. [Google Scholar] [CrossRef]
  11. Stenman, U.H.; Leinonen, J.; Zhang, W.M.; Finne, P. Prostate-specific antigen. Semin. Cancer Biol. 1999, 9, 83–93. [Google Scholar] [CrossRef]
  12. Barrett, T.; Rajesh, A.; Rosenkrantz, A.B.; Choyke, P.L.; Turkbey, B. PI-RADS version 2.1: One small step for prostate MRI. Clin. Radiol. 2019, 74, 841–852. [Google Scholar] [CrossRef]
  13. Marks, L.; Young, S.; Natarajan, S. MRI–ultrasound fusion for guidance of targeted prostate biopsy. Curr. Opin. Urol. 2013, 23, 43. [Google Scholar] [CrossRef] [Green Version]
  14. Litjens, G.; Toth, R.; van de Ven, W.; Hoeks, C.; Kerkstra, S.; van Ginneken, B.; Vincent, G.; Guillard, G.; Birbeck, N.; Zhang, J.; et al. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Med. Image Anal. 2014, 18, 359–373. [Google Scholar] [CrossRef] [Green Version]
  15. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  16. Wang, Y.; Deng, Z.; Hu, X.; Zhu, L.; Yang, X.; Xu, X.; Heng, P.A.; Ni, D. Deep attentional features for prostate segmentation in ultrasound. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 523–530. [Google Scholar]
  17. Wang, Y.; Dou, H.; Hu, X.; Zhu, L.; Yang, X.; Xu, M.; Qin, J.; Heng, P.A.; Wang, T.; Ni, D. Deep attentive features for prostate segmentation in 3D transrectal ultrasound. IEEE Trans. Med. Imaging 2019, 38, 2768–2778. [Google Scholar] [CrossRef] [Green Version]
  18. Mahdavi, S.S.; Chng, N.; Spadinger, I.; Morris, W.J.; Salcudean, S.E. Semi-automatic segmentation for prostate interventions. Med. Image Anal. 2011, 15, 226–237. [Google Scholar] [CrossRef] [Green Version]
  19. Gong, L.; Pathak, S.D.; Haynor, D.R.; Cho, P.S.; Kim, Y. Parametric shape modeling using deformable superellipses for prostate segmentation. IEEE Trans. Med. Imaging 2004, 23, 340–349. [Google Scholar] [CrossRef]
  20. Saroul, L.; Bernard, O.; Vray, D.; Friboulet, D. Prostate segmentation in echographic images: A variational approach using deformable super-ellipse and Rayleigh distribution. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 129–132. [Google Scholar]
  21. Fedorov, A.; Nguyen, P.L.; Tuncali, K.; Tempany, C. Annotated MRI and Ultrasound Volume Images of the Prostate. 2015. Available online: https://zenodo.org/record/16396#.YtpWXoRByUk (accessed on 30 June 2022). [CrossRef]
  22. Liu, Q.; Dou, Q.; Yu, L.; Heng, P.A. Ms-net: Multi-site network for improving prostate segmentation with heterogeneous mri data. IEEE Trans. Med. Imaging 2020, 39, 2713–2724. [Google Scholar] [CrossRef] [Green Version]
  23. Liu, Q.; Dou, Q.; Heng, P.A. Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Lima, Peru, 4–8 October 2020. [Google Scholar]
  24. Altini, N.; Prencipe, B.; Cascarano, G.D.; Brunetti, A.; Brunetti, G.; Triggiani, V.; Carnimeo, L.; Marino, F.; Guerriero, A.; Villani, L.; et al. Liver, Kidney and Spleen Segmentation from CT scans and MRI with Deep Learning: A Survey. Neurocomputing 2022, 490, 30–53. [Google Scholar] [CrossRef]
  25. Hussain, S.M.; Buongiorno, D.; Altini, N.; Berloco, F.; Prencipe, B.; Moschetta, M.; Bevilacqua, V.; Brunetti, A. Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence. Appl. Sci. 2022, 12, 6230. [Google Scholar] [CrossRef]
  26. Brunetti, A.; Altini, N.; Buongiorno, D.; Garolla, E.; Corallo, F.; Gravina, M.; Bevilacqua, V.; Prencipe, B. A Machine Learning and Radiomics Approach in Lung Cancer for Predicting Histological Subtype. Appl. Sci. 2022, 12, 5829. [Google Scholar] [CrossRef]
  27. Altini, N.; Cascarano, G.D.; Brunetti, A.; Marino, F.; Rocchetti, M.T.; Matino, S.; Venere, U.; Rossini, M.; Pesce, F.; Gesualdo, L.; et al. Semantic Segmentation Framework for Glomeruli Detection and Classification in Kidney Histological Sections. Electronics 2020, 9, 503. [Google Scholar] [CrossRef] [Green Version]
  28. Altini, N.; Cascarano, G.D.; Brunetti, A.; De Feudis, D.I.; Buongiorno, D.; Rossini, M.; Pesce, F.; Gesualdo, L.; Bevilacqua, V. A Deep Learning Instance Segmentation Approach for Global Glomerulosclerosis Assessment in Donor Kidney Biopsies. Electronics 2020, 9, 1768. [Google Scholar] [CrossRef]
  29. Liu, L.; Cheng, J.; Quan, Q.; Wu, F.X.; Wang, Y.P.; Wang, J. A survey on U-shaped networks in medical image segmentations. Neurocomputing 2020, 409, 244–258. [Google Scholar] [CrossRef]
  30. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  31. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  32. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 424–432. [Google Scholar]
  33. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 565–571. [Google Scholar]
  34. Altini, N.; Prencipe, B.; Brunetti, A.; Brunetti, G.; Triggiani, V.; Carnimeo, L.; Marino, F.; Guerriero, A.; Villani, L.; Scardapane, A.; et al. A Tversky Loss-Based Convolutional Neural Network for Liver Vessels Segmentation. In International Conference on Intelligent Computing; Springer: Cham, Switzerland, 2020; pp. 342–354. [Google Scholar] [CrossRef]
  35. Prencipe, B.; Altini, N.; Cascarano, G.D.; Brunetti, A.; Guerriero, A.; Bevilacqua, V. Focal Dice Loss-Based V-Net for Liver Segments Classification. Appl. Sci. 2022, 12, 3247. [Google Scholar] [CrossRef]
  36. Bevilacqua, V.; Altini, N.; Prencipe, B.; Brunetti, A.; Villani, L.; Sacco, A.; Morelli, C.; Ciaccia, M.; Scardapane, A. Lung Segmentation and Characterization in COVID-19 Patients for Assessing Pulmonary Thromboembolism: An Approach Based on Deep Learning and Radiomics. Electronics 2021, 10, 2475. [Google Scholar] [CrossRef]
Figure 1. The prostate apex (ground-truth mask in green) is not easily distinguishable from the rest of the image (red dashed box). The yellow circle marks an example of a region with low signal-to-noise ratio. The blue arrow denotes a shadow artifact.
Figure 2. Sample images from both modalities. (Top) Prostate MRI: from left to right, one sample image from each of the PROMISE12, SAML, and ZENODO datasets. (Bottom) Three sample prostate TRUS images from the ZENODO dataset.
Figure 3. Workflow for TRUS and MRI segmentation and subsequent image fusion. Segmentation from TRUS is achieved semiautomatically by fitting a 3D model based on deformable superellipses to user-defined points in at least three slices. Segmentation from MRI is performed fully automatically with the nnU-Net framework. Registration can either be performed automatically, or the user can add anatomical landmarks to constrain the space of transformations.
Figure 4. Examples of the modeling capabilities of deformable superellipses. The left image shows the superellipse as the squareness parameter ϵ varies, the middle image shows the deformable superellipse as the tapering parameter t varies, and the right image shows the deformable superellipse as the circular bending parameter b varies.
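To make the role of these parameters concrete, the following Python sketch generates a deformable superellipse contour. It assumes the standard superellipse parameterization together with Barr-style tapering and circular bending; the exact deformation formulas and parameter values adopted in the paper may differ, so this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def superellipse(ax, ay, eps, n=200):
    """Base superellipse contour: |x/ax|^(2/eps) + |y/ay|^(2/eps) = 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = ax * np.sign(np.cos(theta)) * np.abs(np.cos(theta)) ** eps
    y = ay * np.sign(np.sin(theta)) * np.abs(np.sin(theta)) ** eps
    return x, y

def taper(x, y, t, ay):
    """Barr-style linear tapering along y: x is scaled as a function of y (t in [-1, 1])."""
    return (1.0 + t * y / ay) * x, y

def bend(x, y, b):
    """Barr-style circular bending with curvature b (reduces to the identity as b -> 0)."""
    if abs(b) < 1e-12:
        return x, y
    theta = b * x
    xb = np.sin(theta) * (1.0 / b - y)
    yb = 1.0 / b - np.cos(theta) * (1.0 / b - y)
    return xb, yb

# Illustrative parameter values for a prostate-like contour (millimetres).
x, y = superellipse(ax=25.0, ay=20.0, eps=0.7)
x, y = taper(x, y, t=0.3, ay=20.0)
x, y = bend(x, y, b=0.01)
```

Plotting x against y for different values of eps, t, and b reproduces the qualitative behaviour shown in Figure 4.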
Figure 5. Workflow for TRUS segmentation with the developed methodology. Physicians annotate some points in the apex, the base, and the mid-gland of the prostate. A JSON file with these points is fed to an optimization routine that fits the best 2D superellipse in every annotated slice; a 3D model is then built by linearly interpolating the 2D models. 3D Slicer was employed as the graphical interface to speed up and ease the process.
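A minimal sketch of the per-slice fitting step is given below. It assumes a simple algebraic cost on the implicit superellipse equation and a hypothetical layout for the exported points (the file name annotated_points.json and its structure are illustrative); the actual optimization formulation in the paper also accounts for the deformation parameters and may use a different cost and solver.

```python
import json
import numpy as np
from scipy.optimize import minimize

def superellipse_residual(params, pts):
    """Algebraic residual of the implicit superellipse equation at each annotated point."""
    cx, cy, ax, ay, eps = params
    dx = np.abs(pts[:, 0] - cx) / ax
    dy = np.abs(pts[:, 1] - cy) / ay
    return dx ** (2.0 / eps) + dy ** (2.0 / eps) - 1.0

def fit_superellipse(pts):
    """Fit centre, semi-axes, and squareness to a set of 2D boundary points."""
    cx0, cy0 = pts.mean(axis=0)
    ax0, ay0 = (pts.max(axis=0) - pts.min(axis=0)) / 2.0
    x0 = np.array([cx0, cy0, ax0, ay0, 1.0])
    # In practice, eps and the semi-axes should be bounded away from zero.
    cost = lambda p: float(np.sum(superellipse_residual(p, pts) ** 2))
    return minimize(cost, x0, method="Nelder-Mead").x

# Hypothetical layout of the points exported from 3D Slicer, one entry per slice.
with open("annotated_points.json") as f:
    slices = json.load(f)  # e.g., {"slice_12": [[x, y], ...], "slice_20": [...], ...}

per_slice_models = {name: fit_superellipse(np.asarray(pts)) for name, pts in slices.items()}
# A 3D surface is then obtained by linearly interpolating the per-slice parameters.
```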
Figure 6. The original ground-truth segmentation masks are reported in the top row. They are then smoothed with a Gaussian filter, as depicted in the middle row. Lastly, distance maps are obtained from the smoothed masks, as shown in the bottom row.
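A possible realization of this preprocessing with SciPy is sketched below; the smoothing strength (sigma) and the signed-distance convention are assumptions, and an equivalent map could also be obtained with SimpleITK's SignedMaurerDistanceMap filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt

def smoothed_distance_map(mask, sigma=2.0, spacing=None):
    """Smooth a binary mask with a Gaussian filter and derive a signed distance map."""
    smooth = gaussian_filter(mask.astype(np.float32), sigma=sigma) > 0.5
    # Positive distances outside the gland, negative inside; pass spacing
    # (voxel size per axis) to obtain distances in millimetres.
    dist_out = distance_transform_edt(~smooth, sampling=spacing)
    dist_in = distance_transform_edt(smooth, sampling=spacing)
    return dist_out - dist_in
```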
Figure 7. The proposed registration workflow starts by preprocessing the segmentation masks so that they are isotropic and share the same resolution. SSD is then employed as the registration metric, with gradient descent as the optimizer. Two kinds of initialization were considered: one based on the image centers and the other based on anatomical landmarks.
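The following SimpleITK sketch mirrors this workflow under some assumptions: a rigid Euler3DTransform, 1 mm isotropic resampling, illustrative optimizer settings, and hypothetical file names. The mean-squares metric in SimpleITK corresponds to the SSD criterion; for the landmark-based configuration, paired points could instead be passed to sitk.LandmarkBasedTransformInitializer.

```python
import SimpleITK as sitk

def resample_isotropic(img, spacing=1.0):
    """Resample a mask to isotropic voxels with nearest-neighbour interpolation."""
    new_size = [int(round(sz * sp / spacing))
                for sz, sp in zip(img.GetSize(), img.GetSpacing())]
    return sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkNearestNeighbor,
                         img.GetOrigin(), (spacing,) * 3, img.GetDirection(),
                         0, img.GetPixelID())

# Hypothetical file names for the TRUS (fixed) and MRI (moving) prostate masks.
fixed = sitk.Cast(resample_isotropic(sitk.ReadImage("trus_mask.nii.gz")), sitk.sitkFloat32)
moving = sitk.Cast(resample_isotropic(sitk.ReadImage("mri_mask.nii.gz")), sitk.sitkFloat32)

# Center-based initialization of a rigid transform.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()  # sum-of-squared-differences (SSD) criterion
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
final_transform = reg.Execute(fixed, moving)
```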
Figure 8. Results for prostate segmentation from MRI. The top row contains images from the SAML dataset, and the bottom row contains images from the ZENODO dataset. The ground truth is shown in red, whereas the predictions from the nnU-Net models are shown in green. The middle image shows the prediction mask of the nnU-Net trained for only 10 epochs, whereas the right image shows the prediction mask of the model trained on the PROMISE12 dataset.
Figure 9. Results for prostate segmentation from TRUS. The left image shows the ground-truth prostate mask in red. The right image depicts the segmentation results after the first and second iterations, in green and yellow, respectively.
Figure 10. Workflow employed for the image-fusion procedure. Segmentation masks are obtained for both imaging domains, TRUS and MRI. Then, the registration is performed as described in Section 2.5, so that the images can be fused. Both masks are shown after the registration procedure.
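Once the transform has been estimated, the MRI volume can be resampled into the TRUS reference frame and blended for visual inspection. The sketch below uses SimpleITK; the alpha-blending step is only an illustrative way to display the fused volumes and is not necessarily how the framework's graphical interface renders them.

```python
import SimpleITK as sitk

def fuse_mri_into_trus(trus_img, mri_img, transform, alpha=0.5):
    """Resample the MRI volume into TRUS space with the estimated transform and
    alpha-blend the two volumes for visual inspection."""
    mri_in_trus = sitk.Resample(mri_img, trus_img, transform,
                                sitk.sitkLinear, 0.0, sitk.sitkFloat32)
    trus_f = sitk.RescaleIntensity(sitk.Cast(trus_img, sitk.sitkFloat32), 0, 255)
    mri_f = sitk.RescaleIntensity(mri_in_trus, 0, 255)
    return sitk.Cast(alpha * trus_f + (1.0 - alpha) * mri_f, sitk.sitkUInt8)
```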
Table 2. Quantitative results for MRI prostate segmentation with nnU-Net. Performance was measured on two validation sets.
| Train Set | Epochs | Test Set | Dice [%] | RVD [%] | HD [mm] | ASSD [mm] |
|---|---|---|---|---|---|---|
| PROMISE12 | 1000 | SAML-V | 88.18 ± 10.53 | 17.58 ± 31.61 | 21.03 ± 51.06 | 0.86 ± 1.14 |
| PROMISE12 | 1000 | ZENODO | 91.17 ± 1.19 | 4.13 ± 8.79 | 16.11 ± 3.56 | 0.26 ± 0.01 |
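For reference, the Dice coefficient and the relative volume difference (RVD) reported in Tables 2 and 3 can be computed from binary masks as in the sketch below; the sign convention for RVD (prediction volume minus ground-truth volume) is an assumption, and surface-based metrics such as HD and ASSD require surface distance computations (for instance via SimpleITK) that are omitted here.

```python
import numpy as np

def dice_percent(gt, pred):
    """Dice similarity coefficient between two binary masks, in percent."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    return 100.0 * 2.0 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum())

def rvd_percent(gt, pred):
    """Signed relative volume difference of the prediction w.r.t. the ground truth, in percent."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    return 100.0 * (pred.sum() - gt.sum()) / gt.sum()
```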
Table 3. Quantitative results for TRUS prostate segmentation with the proposed superellipse-based approach. Results are reported after the first and second iterations of the algorithm.
| Case | Iteration | Dice [%] | RVD [%] | HD [mm] | ASSD [mm] |
|---|---|---|---|---|---|
| Case 9 | 1st | 87.15 ± 2.41 | −13.27 ± 8.34 | 25.12 ± 5.58 | 0.53 ± 0.120 |
| Case 9 | 2nd | 88.56 ± 2.66 | −9.44 ± 8.88 | 16.10 ± 7.12 | 0.38 ± 0.022 |
| Case 10 | 1st | 89.31 ± 1.13 | −12.21 ± 3.06 | 9.25 ± 2.41 | 0.23 ± 0.020 |
| Case 10 | 2nd | 92.57 ± 0.45 | −4.86 ± 0.36 | 9.37 ± 2.53 | 0.17 ± 0.015 |
| Case 12 | 1st | 90.76 ± 1.39 | −5.46 ± 3.61 | 23.30 ± 9.58 | 0.30 ± 0.049 |
| Case 12 | 2nd | 92.47 ± 0.30 | −1.87 ± 1.24 | 21.26 ± 8.22 | 0.23 ± 0.048 |
Table 4. Registration results for both configurations: the first uses the centers of the images as the initializer, and the second uses a set of anatomical landmarks as the initializer.
| Experiments | Dice [%] | Jaccard [%] | RVD [%] | HD [mm] |
|---|---|---|---|---|
| case 10 (center) | 91.77 | 84.79 | −0.86 | 3.77 |
| case 10 (landmarks) | 91.78 | 84.80 | −0.87 | 3.77 |
| case 12 (center) | 94.82 | 90.15 | −5.79 | 2.12 |
| case 12 (landmarks) | 94.85 | 90.21 | −5.79 | 2.09 |
| case 9 (center) | 93.61 | 87.99 | −1.86 | 3.55 |
| case 9 (landmarks) | 93.60 | 87.98 | −1.88 | 3.60 |
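The overlap and distance metrics reported in Table 4 can be obtained with standard SimpleITK filters, as sketched below; the file names are placeholders, both masks are assumed to live in the same physical space after registration, and RVD can be computed as in the earlier sketch.

```python
import SimpleITK as sitk

def registration_overlap_metrics(fixed_mask_path, warped_mask_path):
    """Dice, Jaccard, and Hausdorff distance between two registered binary masks
    defined in the same physical space."""
    fixed = sitk.ReadImage(fixed_mask_path, sitk.sitkUInt8)
    moving = sitk.ReadImage(warped_mask_path, sitk.sitkUInt8)

    overlap = sitk.LabelOverlapMeasuresImageFilter()
    overlap.Execute(fixed, moving)

    hausdorff = sitk.HausdorffDistanceImageFilter()
    hausdorff.Execute(fixed, moving)

    return {
        "dice_pct": 100.0 * overlap.GetDiceCoefficient(),
        "jaccard_pct": 100.0 * overlap.GetJaccardCoefficient(),
        "hd_mm": hausdorff.GetHausdorffDistance(),
    }
```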