Article

Design of Proposed Software System for Prediction of Iliosacral Screw Placement for Iliosacral Joint Injuries Based on X-ray and CT Images

by Vojtech Benda, Jan Kubicek, Roman Madeja, David Oczka, Martin Cerny and Kamila Dostalova
1 Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17. listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
2 Trauma Center, University Hospital Ostrava, 17. listopadu 1790, Poruba, 708 52 Ostrava, Czech Republic
* Author to whom correspondence should be addressed.
J. Clin. Med. 2023, 12(6), 2138; https://doi.org/10.3390/jcm12062138
Submission received: 4 January 2023 / Revised: 3 March 2023 / Accepted: 7 March 2023 / Published: 9 March 2023
(This article belongs to the Section Nuclear Medicine & Radiology)

Abstract

One of the crucial tasks in planning surgery of the iliosacral joint is the placement of an iliosacral screw to fix the broken parts of the pelvis. The proper screw trajectory is usually planned preoperatively from X-ray images acquired under different angles, which then guide the surgeons during the procedure. This approach is inherently complicated because the 2D X-ray images offer no spatial perspective. Therefore, in this pilot study, we propose a set of software tools that build a simulation model of digitally reconstructed radiograph (DRR) images containing a virtual iliosacral screw to guide the surgical process. This pilot study reports testing on two clinical cases to reveal the initial performance and usability of the software in clinical conditions. The model is subsequently used in a multimodal registration against reference intraoperative X-ray images to select the slice from the 3D dataset that best fits the reference X-ray. The proposed software solution uses input CT slices of the pelvic area to create a segmentation model of the individual bone components. A model of an iliosacral screw is then inserted into this model. In the next step, we propose the software CT2DDR, which generates DRR projections containing the iliosacral screw. In the last step, we propose a multimodal registration procedure that registers a selected number of slices to the reference X-ray and, based on the Structural Similarity Index (SSIM) and the correlation coefficient, finds the DRR projection that best matches the X-ray image. In this pilot study, we also provide a comparative analysis of the computational cost of the multimodal registration for various numbers of DRR slices to characterize the performance of the software. The proposed model has versatile use for modeling and surgery planning in the pelvic area for fractures involving the iliosacral joint.

1. Introduction

Preoperative consideration and planning are necessary before applying any osteosynthesis. It is advisable to determine the optimal method of fracture repositioning, the type of osteosynthesis and also the optimal osteosynthetic material for the specific injury. Nowadays, several computer software tools allow for virtual fracture repositioning as well as virtual osteosynthesis [1,2]. These software tools usually operate on CT data of the affected skeletal area. Once the CT data are loaded into the preoperative planning software, the individual fracture fragments can be separated. Some software tools can separate larger fragments automatically; correction by the user is then necessary to refine the fragment boundaries. In most cases, point marking of the fragment margins is necessary, and the software then allows the margins to be drawn more accurately than with automatic separation. After the separation of the individual fragments, which are distinguished by color, they can be repositioned. Repositioning can again be done manually (using the mouse), where most of the software allows the fragments to be moved in different axes and planes. Some software allows automatic repositioning using mirroring; this alternative requires a CT scan of the healthy part of the skeleton, i.e., the other side of the pelvis or the other limb. After repositioning, virtual osteosynthetic material (plates, screws, nails, etc.) can be inserted. However, most of the software is tied to a specific manufacturer of osteosynthetic material, which supplies the software with the necessary shapes and dimensions of the individual components. The software also allows the osteosynthetic material, especially plates, to be shaped optimally. A virtual plate shaped in this way can be exported from the software and sent to the manufacturer to produce an individually shaped implant. The optimal dimensions of individual plates, screws or nails can also be determined in the software. Planning the implant placement preoperatively in computer navigation software is possible as well, but the time for this task is limited, as it may increase the total operating time.
In cooperation between the Technical University of Ostrava and the Trauma Center at the University Hospital in Ostrava, Czech Republic, software tools were developed to link the implant placement planned preoperatively from CT data with intraoperative fluoroscopic X-ray projections. After fracture repositioning and implant planning, the proposed software would be used to fit the fluoroscopic X-ray projections to the prepared preoperative model. The proposed software tools would then allow the planned implant to be projected into the fluoroscopic views. During surgery, it would be possible to select the optimal skeletal entry point for the insertion of guide wires or canal drilling, and also to compare the direction of drilling with the optimal direction given by the position of the planned implant. At present, these tasks are performed on the basis of the surgeon’s preoperative reasoning as well as anatomical knowledge and imagination during the surgical procedure. The main contributions of this study include the following:
  • A multiregional 3D segmentation model of the pelvis area from CT images
  • Reconstructed DRR projections with a virtual iliosacral screw
  • Multimodal (X-ray/CT) image registration for optimal CT slice selection according to the reference X-ray image.
The rest of the paper is organized as follows. Section 2 reviews the recent state of the art of CAOS systems in orthopedic surgery. Section 3 describes the proposed system, and Section 4 presents the results achieved by the proposed system on two cases of iliosacral joint injury. Section 5 presents a discussion and future perspectives of this research.

2. Recent Work

Over the last decade, computer-assisted orthopedic surgery (CAOS) has become a part of orthopedic surgery, allowing surgeons to perform surgical operations with better accuracy and results, thus improving the patient’s well-being [3]. Finding its application first in orthopedic spinal surgery [4,5], CAOS has progressed into other procedures involving the musculoskeletal system, ranging from total hip or knee arthroplasty (THA and TKA, respectively) to bone tumor surgery [3,6,7]. CAOS navigation and planning are implemented using various methods and their combination, of which the major ones are CT-based imaging, intraoperative fluoroscopy-based imaging, and imageless navigation systems [3,7,8,9].
Sacroiliac joint dislocations and sacral fractures are challenging to treat due to the proximity of neurovascular structures. The treatment of choice is percutaneous iliosacral screw (ISS) insertion. Computer-assisted navigation has the potential not only to reduce malposition rates and nerve and vessel injuries but also to decrease operation time and radiation exposure [10]. As such, precise anatomical data of the patient’s pelvis and sacral bone are required. Virtual planning of pelvic and sacral bone operations on a patient-specific 3D model of the hip allows surgeons to compare different operation strategies.
CT-guided iliosacral fixation offers direct visualization of the screws, thus reducing the malposition of the screws and radiation dose. This procedure provides a precise geometry of fracture fragments and anatomical structures, allowing an accurate insertion [11,12,13,14].
A combination of 3D fluoroscopic navigation and computed tomography (CT-3D-fluoroscopy) has been applied to iliosacral screw insertion in three types of simulated posterior pelvic ring disruptions. Despite the small patient sample size, CT-3D-fluoroscopy proved successful in assisting iliosacral joint surgery with little or no damage to the surrounding soft tissue [13,14,15].
CAOS procedures combined with 3D printing [16] of a patient-specific pelvic bone model offer a potential guide template for treating iliosacral joint dislocation. The model is first reconstructed from CT images in appropriate software (Mimics and similar) and then 3D printed with photosensitive resin. The guide template reduced fluoroscopy time and screw insertion time, with low blood loss and fast postoperative recovery [17].
CAOS has also found a place in virtual reality (VR) training simulations, which provide the ability to train on anatomically accurate 3D hip models for orthopedic surgeries, particularly for medical residents. Compared to conventional training, simulations have the potential to create a risk-free environment for improving personnel skills without any harm to patients. The main limitations of training simulators are feedback to the trainee, availability, and the financial cost of such simulators [18,19].
A notable proposal is to use an arc screw for the internal fixation of a pelvic fracture through an internal arc fixation channel (IAFC) located inside the pelvis [20,21,22]. An automatic planning algorithm for pelvic fractures is introduced to determine an optimal channel for the arc screw. Using finite element analysis (FEA) on pelvic 3D models, a stress simulation determined the forces applied to the pelvis in three types of postures. Based on the FEA results, the planning algorithm then locates the patient-specific position, length and curvature of the arc screw for pelvic ring fracture fixation. The current implementation uses a robot-navigated drilling system capable of drilling a constant-curvature channel in a Sawbones model of the pelvis. Due to the complexity of the pelvic structure and the surrounding soft tissue, the feasibility of IAFC is still subject to change [20,21,22].

3. Materials and Methods

In this section, we present the realization and workflow of positioning the guide for an iliosacral screw. The proposed scheme first generates a segmentation model in the Mimics system, reflecting the individual bones in the pelvis area. The methodology then generates digitally reconstructed radiograph (DRR) projections from Mimics containing the iliosacral screw. Lastly, we propose a multimodal registration scheme aimed at finding the slice of the 3D dataset (DRR projection) that best fits a given X-ray image, providing orientation within the 3D CT volume. The general workflow consists of the three phases shown in Figure 1:
  • Generation of 3D models of the pelvis
  • Generation of digitally reconstructed radiograph (DRR) projections
  • Multimodal image registration of DRR projections to a reference X-ray image

3.1. Generation of 3D Models of the Pelvis

The image processing software Materialise Mimics generates 3D models from computed tomography (CT) image data, and the guide is positioned in the 3D data optimization software Materialise 3-matic; both are developed by the Belgian company Materialise NV. These software tools offer the user a wide range of tools to extract and edit anatomical structures in medical images, generate 3D mesh models and compute various analyses and measurements. In order to create accurate 3D models and place the guide, voxels representing the pelvis are first extracted using a thresholding-based segmentation method, which compares each voxel’s Hounsfield unit (HU) value to a range with minimum (T1) and maximum (T2) threshold values, as shown in (1). The output is a 3D binary mask, represented as a 3D matrix, which is combined with the original CT images to obtain a 3D mask composed of voxels that fall within the defined range of Hounsfield units.
$$ I_2(x,y) = \begin{cases} 0, & I_1(x,y) > T_2 \\ 1, & T_1 \le I_1(x,y) \le T_2 \\ 0, & I_1(x,y) < T_1 \end{cases} \qquad (1) $$
where I1 and I2 are the input and output 3D matrices consisting of CT slices. This method extracts the desired bone tissues and removes the surrounding soft tissue, such as muscles and skin. However, it will occasionally extract voxels that represent other anatomical structures, medical objects or scattering artifacts caused by already present metal objects whose Hounsfield units are similar to those of bone tissue. The region-growing segmentation method is then used on each CT slice to remove these undesirable artifacts and objects, thus increasing the quality of the subsequent 3D reconstruction. It is an iterative algorithm that grows a region of voxels depending on their seed values Si(x,y):
$$ \left| S_i(x,y) - I(x,y) \right| \le T \qquad (2) $$
If the absolute difference does not exceed the threshold value T, the voxel is added to the output region. The growth also depends on the connectivity between voxels, where a 6-connectivity option checks the neighboring faces of the selected voxel and those connected to it. The last operation consists of splitting the whole pelvis mask into individual anatomical structures: lumbar vertebrae, sacral bone, left and right ilium and femurs. The process of segmentation and decomposition of the pelvis area from CT slices is presented in Figure 2.
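The segmentation itself was performed in Materialise Mimics; purely as an illustration of the two operations described above, the following Python sketch reproduces the HU thresholding of (1) and a seed-based region growing in the spirit of (2) using scikit-image. The threshold values, the seed coordinates and the helper names are illustrative assumptions, not the settings used in Mimics.

```python
import numpy as np
from skimage.segmentation import flood

def threshold_bone(volume_hu, t1=200.0, t2=3000.0):
    """Binary mask of voxels whose HU value lies within [T1, T2], cf. Eq. (1).
    T1/T2 are illustrative thresholds, not the values used in Mimics."""
    return (volume_hu >= t1) & (volume_hu <= t2)

def grow_region(volume_hu, seed, tolerance=100.0):
    """Seed-based region growing, cf. Eq. (2): a voxel joins the region when its
    HU value differs from the seed value by at most `tolerance`, using
    6-connectivity (connectivity=1) as described above."""
    return flood(volume_hu, seed_point=seed, tolerance=tolerance, connectivity=1)

# Illustrative usage on a (slices, rows, cols) array of Hounsfield units:
# bone_mask = threshold_bone(ct_hu)
# sacrum_mask = grow_region(ct_hu, seed=(120, 260, 255)) & bone_mask
```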
The purpose of splitting the mask is to give the option of hiding individual bones to accurately position the guide. From the resulting mask, we use Mimics’ marching cubes algorithm to generate a 3D mesh object of the pelvic bones. The mesh object is then exported into 3-matic (Figure 3), where the guide is positioned depending on the patient’s injured iliosacral region. Once the guide’s 3D mesh model is placed, it is exported into Mimics in an STL format, which retains information about its location and rotation in 3D world space. A new mask of the guide is then generated and fused with CT slices. Lastly, new DICOM files in the form of CT slices are generated, containing both anatomical structures and the positioned guide (Figure 3).
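For readers without access to Mimics and 3-matic, a comparable surface mesh could be obtained with an open-source marching cubes implementation; the sketch below (scikit-image) is a hedged illustration rather than the export pipeline used in this study, and the mask variable and spacing tuple are assumed placeholders.

```python
import numpy as np
from skimage.measure import marching_cubes

def mask_to_mesh(mask, spacing):
    """Triangulate a binary bone mask into a surface mesh with marching cubes.
    `spacing` = (slice thickness, row spacing, column spacing) in mm, taken from
    the DICOM metadata, so the mesh is expressed in patient millimetres."""
    verts, faces, normals, values = marching_cubes(mask.astype(np.float32),
                                                   level=0.5, spacing=spacing)
    return verts, faces

# Illustrative call; the mask and the spacing values are placeholders:
# verts, faces = mask_to_mesh(sacrum_mask, spacing=(0.75, 0.94, 0.94))
# Libraries such as numpy-stl or trimesh can then write verts/faces to an STL
# file, analogous to the Mimics/3-matic export described above.
```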

3.2. DRR Projection Generation (CT2DDR)

Digitally reconstructed radiograph (DRR) projections simulate a conventional 2D X-ray image from CT imaging data. The projections are generated with a maximum intensity projection technique that selects the highest Hounsfield unit along each projection direction within the CT volume. The following section describes the process of generating DRR projections (Figure 4), consisting of two steps: image rotation and maximum intensity projection. The resulting DRR projections are then registered to a reference X-ray image. For the acquisition of DRR projections, we developed the software CT2DDR, which calculates the individual DRR projections.
An interactive application has been created to import DICOM files, which store CT data in the form of transversal slices. First, a three-dimensional matrix consisting of ordered CT slices as layers is rotated at a specific angle θ around the z-axis. The rotation is performed using a rotation matrix Rz:
$$ R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (3) $$
which is a form of image transformation that changes the location of each pixel. Second, maximum intensity projection locates the highest pixel intensity value, as described in (4), and stores it in a two-dimensional matrix, where the output matrix’s number of rows and columns correspond to the input matrix’s number of layers and columns, respectively.
$$ I(x,y) = \max_{z} M(x,y,z) \qquad (4) $$
where M is the input three-dimensional matrix, and I is the output two-dimensional matrix. The generated DRR projections are then exported as graphics files, as shown in Figure 5.
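As a rough illustration of the two CT2DDR steps, rotation about the z-axis and maximum intensity projection (Eqs. (3) and (4)), the following Python sketch produces one DRR-like projection from a CT volume. It is not the CT2DDR implementation, and the axis conventions are assumptions based on the description above.

```python
import numpy as np
from scipy.ndimage import rotate

def generate_drr(volume_hu, angle_deg):
    """Rotate the CT volume in-plane (about the z-axis) and collapse it with a
    maximum intensity projection, cf. Eqs. (3) and (4)."""
    # volume_hu is a (slices, rows, cols) array; axes=(1, 2) rotates each
    # transversal slice around the z-axis by angle_deg degrees
    rotated = rotate(volume_hu, angle_deg, axes=(1, 2), reshape=False,
                     order=1, mode='nearest')
    # MIP along the anterior-posterior axis: the output rows correspond to the
    # CT layers and the output columns to the slice columns, as described above
    return rotated.max(axis=1)

# Illustrative generation of a series of projections in 45-degree increments:
# drrs = {angle: generate_drr(ct_hu, angle) for angle in range(0, 360, 45)}
```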

3.3. Image Histogram Pre-Processing

Image histogram matching is a low-level image processing transformation that aims to normalize the histogram of an input image to that of a reference image, thus changing the distribution of pixel intensities [23]. Histogram matching is used as a potential pre-processing step to increase the precision and decrease the computation time of image registration.
The algorithm computes the histogram pr(r) of an input image and uses it to map the pixel values to the values of the histogram-equalized image in the range k = [0, L − 1]:
$$ s_k = (L-1)\sum_{j=0}^{k} p_r(r_j) \qquad (5) $$
where L is the number of intensity levels in the input image, determined by its bit depth. All values of a transformation function G(zq) are then computed in the same range as k, so that G(zq) = sk:
$$ G(z_q) = (L-1)\sum_{i=0}^{q} p_z(z_i) \qquad (6) $$
and the values zq are obtained from the inverse transformation of G:
$$ z_q = G^{-1}(s_k) \qquad (7) $$
The result is a histogram-matched image mapped from equalized pixel values sk to the corresponding values zq.
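A minimal sketch of this pre-processing step, assuming the DRR projection and the reference X-ray are already loaded as 2-D arrays; scikit-image's match_histograms implements the mapping of Eqs. (5)-(7). The function name match_drr_to_xray is an illustrative placeholder.

```python
import numpy as np
from skimage.exposure import match_histograms

def match_drr_to_xray(drr, xray):
    """Remap the DRR intensities so that their histogram follows the histogram
    of the reference X-ray before registration, cf. Eqs. (5)-(7)."""
    return match_histograms(drr.astype(np.float64), xray.astype(np.float64))
```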

3.4. Image Registration Model

Image registration is one of many tasks of medical image analysis, which deals with collecting, processing, and evaluating medical images acquired from imaging techniques, most prominently magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT), but also conventional radiography (CR) and medical ultrasonography (US). It is a process of transforming images or 3D volumes from the same modality (monomodal) or multiple modalities (multimodal) into a single coordinate space so that the data can be accurately compared and studied. Its task is to reduce the inevitable misalignment that manifests in pre-, intra- and postoperative image acquisition. Registration is realized by mapping the source images to target images, also called sensed images and reference images, respectively. Monomodal registration deals with images taken from the same modality by the same scanner: CT-CT or MRI-MRI. Multimodal registration, on the other hand, deals with images taken from different modalities and different scanners: CR-CT, CT-MRI, or CT-PET [24,25,26].
In this paper, we introduce the use of multimodal image registration to preoperatively plan screw insertions in the iliosacral region. The first section describes the registration algorithm used to register source CT image data to target CR images. The second section describes similarity metrics used to assess the precision of registration.

3.4.1. Multimodal Image Registration Algorithm

Image registration algorithms [26] can be defined by three components. The first is a cost function that describes the dissimilarity between two images and is complemented by various regularization terms, such as fluid, diffusion and elastic. The second is a space of geometric transformations that allows the images to deform; these range from rigid (translation and rotation) and affine (adding scaling and shear) to non-rigid transformations that allow local warping. The third component is a strategy for minimizing the cost function. This paper focuses on the implementation of Thirion’s Demons algorithm [27,28,29,30], which treats image matching as a diffusion process. It is an optical flow-based non-rigid registration method that computes the demon forces according to the local characteristics of the images. A Gaussian smoothing filter with a given σ is then applied as a regularization step in each iteration until convergence. The process is based on the following optical flow equation, which describes the displacement v:
$$ \vec{v} = \frac{(m - s)\,\nabla s}{\left\| \nabla s \right\|^2 + (m - s)^2} \qquad (8) $$
where m and s are the intensities of the source image M and the target image S, respectively, at a given point, and ∇s is the gradient of the target image S.
Considering a source image M and a target image S, the algorithm aims to find a final transform T that belongs to a set of allowed deformations between the space of the source image M and the space of the target image S. In each iteration, the deformation Ti(M) of the source M becomes Ti+1(M), constrained by internal forces fint and external forces fext created by the interactions between Ti(M) and S. This process is described in the block diagram in Figure 6.
The first step consists of precomputation of the set of demon forces Ds, which are extracted from the target image S, where one pixel (voxel for 3D images) corresponds to one demon force. The second step is an iterative estimation of the deformation of the source image T, from the source space M to the target space S. Each demon force is described by its spatial position P, the intensity at that location s(P), and a direction from the inside to the outside based on the gradient. Figure 7 presents an example of multimodal registration between the target X-ray image and a moving DRR projection.
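The registration in this study was realized with the authors' own tooling; the following Python sketch only illustrates Thirion demons iterations as described by Eq. (8), with Gaussian smoothing of the displacement field in every iteration. The iteration count, σ and the small constant eps are assumed values, and demons_register is a hypothetical helper name.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(moving, fixed, n_iter=200, sigma=1.5, eps=1e-9):
    """Minimal Thirion demons sketch: estimates a displacement field that warps
    `moving` (DRR projection) onto `fixed` (X-ray), both 2-D float arrays."""
    fixed = fixed.astype(np.float64)
    moving = moving.astype(np.float64)
    gy, gx = np.gradient(fixed)                      # gradient of the target image S
    rows, cols = np.indices(fixed.shape).astype(np.float64)
    vy = np.zeros_like(fixed)                        # accumulated displacement field
    vx = np.zeros_like(fixed)
    for _ in range(n_iter):
        # warp the moving image with the current displacement field
        warped = map_coordinates(moving, [rows + vy, cols + vx],
                                 order=1, mode='nearest')
        diff = warped - fixed                        # (m - s) in Eq. (8)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
        uy = diff * gy / denom                       # demon force, Eq. (8)
        ux = diff * gx / denom
        vy = gaussian_filter(vy - uy, sigma)         # Gaussian regularization per iteration
        vx = gaussian_filter(vx - ux, sigma)
    warped = map_coordinates(moving, [rows + vy, cols + vx], order=1, mode='nearest')
    return warped, (vy, vx)
```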

3.4.2. Statistical Metrics for Registration Evaluation

The structural similarity (SSIM) index [31] is a metric for the objective evaluation of two images containing the same overall structures, which are defined by their contrast, shape and luminance. The metric is defined as:
$$ \mathrm{SSIM}(x,y) = \frac{\left( 2\mu_x \mu_y + C_1 \right)\left( 2\sigma_{xy} + C_2 \right)}{\left( \mu_x^2 + \mu_y^2 + C_1 \right)\left( \sigma_x^2 + \sigma_y^2 + C_2 \right)} \qquad (9) $$
where μx and μy are the weighted means of images x and y, σx2 and σy2 are their variances, and σxy is their covariance. The parameter Ci = (Ki L)2, where L is the dynamic range of the pixel values (2^(bits per pixel) − 1), and K1 ≪ 1 and K2 ≪ 1 are small scalar constants for C1 and C2. The resulting index value lies within the range [0, 1], where values close to 0 indicate low similarity and values close to 1 indicate high similarity of the input images x and y.
The correlation coefficient is a metric that computes the linear correlation between two images x and y, defined as the ratio of the sum of the products of the deviations from the means to the square root of the product of the sums of squared deviations:
$$ \mathrm{CORR}(x,y) = \frac{\sum_i \left( x_i - \bar{x} \right)\left( y_i - \bar{y} \right)}{\sqrt{\sum_i \left( x_i - \bar{x} \right)^2 \sum_i \left( y_i - \bar{y} \right)^2}} \qquad (10) $$
where xi and yi are the pixel intensity values and x̄ and ȳ are the arithmetic means of each image. The resulting value lies within the range of −1 to +1, where values close to −1 indicate a strong negative linear correlation, values close to 0 indicate little linear correlation, and values close to +1 indicate a strong positive linear correlation between images x and y.
The resulting registered DRR projections are evaluated in terms of their similarity to the respective reference X-ray image. The higher the evaluation indexes, the better the registration algorithm deformed the DRR projections toward their reference X-ray image.
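Assuming the registered DRR projection and the reference X-ray are available as 2-D arrays, both metrics can be evaluated with standard libraries; this sketch uses scikit-image for SSIM, Eq. (9), and NumPy for the correlation coefficient, Eq. (10), and is not tied to the evaluation code used in this study. The helper name evaluate_registration is illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_registration(registered, reference):
    """Similarity of a registered DRR projection to the reference X-ray image:
    SSIM, cf. Eq. (9), and Pearson correlation coefficient, cf. Eq. (10)."""
    ssim_val = structural_similarity(
        registered, reference,
        data_range=float(reference.max() - reference.min()))
    corr_val = np.corrcoef(registered.ravel(), reference.ravel())[0, 1]
    return ssim_val, corr_val
```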

4. Results

This section is focused on the results of image registration with the use of the Demons algorithm to match input DRR projection images to reference X-ray images. Computed tomography and X-ray datasets of two patients were used in this pilot study. The patients’ records were used in this study under the approval of the Ethics Committee of the University Hospital in Ostrava, Czech Republic (reference number 1030/2022). Table 1 and Table 2 describe the technical parameters of each image dataset per patient.
The registration process follows the steps described in Figure 1. A corresponding range of DRR projections is chosen, depending on the position and rotation of the patient’s pelvis in the X-ray image. This range of projections accounts for deviations in the patient’s position and rotation, which are likely to occur between the X-ray and CT scans. A smaller range of projections, for example 10°, is preferred to reduce the computation time of the registration. The output registered images are then evaluated using the similarity metrics described in Section 3.4.2 to obtain the projection with the highest similarity to its reference image, as seen in Figure 8 and Figure 9, which show the registered projections with the highest similarity together with their input images.
Figure 10 and Figure 11 show the summary of similarity metrics per normalized rotation angle of the registered and original DRR projections for the two patients, where the angle 0° signifies the base DRR projection, which corresponds to the general rotation of the patient’s pelvis in the reference X-ray image, 345° is the lower limit, and 15° is the upper limit of a 30° angle range. After normalization, the range spans from −15° to 15°, with 0° being the base DRR projection. A full summary of the highest similarity results and their projection angles per patient can be found in Table 3. Because the structural similarity index and the correlation coefficient are computed differently (the correlation coefficient considers only the mutual relationship of the pixel values of the two images), the registered DRR projection with the highest similarity to the reference image may differ between the two metrics. This can be observed in Figure 10 for Patient 1, where the best-matched projection angle based on structural similarity is 2°, unlike the correlation coefficient, for which it is 8°. For Patient 2 (Figure 11), the structural similarity index and the correlation coefficient agree. The difference in the highest results depends mainly on the pixel intensity distributions of the DRR projection images and X-ray images and differs across the tested patient datasets. Therefore, we indicate the best-registered projection angle for both metrics based on the highest value of the two metrics and the nearest projection angle to the base projection angle, which must correspond to the rotation in the patient’s X-ray image.
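Under the assumptions of the earlier sketches (the hypothetical demons_register and evaluate_registration helpers), the selection of the best-matched projection can be summarized as a loop over the chosen angle range that keeps the projection with the highest similarity. This is an illustrative outline of the procedure, not the implementation used in this study.

```python
def select_best_projection(drrs, xray):
    """Register every DRR projection in the chosen angle range to the reference
    X-ray and keep the angle with the highest SSIM; `drrs` maps a normalized
    rotation angle (e.g. -15..15 degrees) to a 2-D DRR projection."""
    best = None
    for angle, drr in sorted(drrs.items()):
        registered, _ = demons_register(drr, xray)
        ssim_val, corr_val = evaluate_registration(registered, xray)
        if best is None or ssim_val > best['ssim']:
            best = {'angle': angle, 'ssim': ssim_val,
                    'corr': corr_val, 'image': registered}
    return best

# best = select_best_projection(drrs, xray)
# best['angle'] then corresponds to the best-registered projection angle.
```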

Computation Time

Overall computation time is significantly dependent on the range of DRR projections to be registered, the size and pixel intensity distribution of the input and reference images, and the computer hardware used. We used two computer systems for this comparison. The first computer, designated PC 1, consists of the following components: a 6th generation quad-core Intel Core i5-6400 processor with a frequency of 2.7 GHz, 16 GB of available memory, an Nvidia GTX 1060 6 GB graphics card and the Windows 10 operating system. The second system, PC 2, consists of a 10th generation quad-core (8-thread) Intel Core i5-10300H processor with a frequency of 2.5 GHz, 16 GB of available memory, Intel UHD integrated graphics together with an Nvidia RTX 2060 6 GB graphics card, and the Windows 10 operating system.
On PC 1, the registration time ranges from 7.65 to 9.22 s, with a mean of 7.74 s per DRR projection for Patient 1. On PC 2, the average time was reduced to 6.36 s per DRR projection, with an overall range of 5.64 to 7.39 s. A summary of computation times for both computer systems per patient is shown in Table 4.
Figure 12 shows the computation times of registration for five ranges of DRR projections (1°, 10°, 20°, 30°, and 360°) for two patients on PC 1. A full summary of computation times for these ranges across all patients and both computer systems can be found in Table 4.
As described in the pre-processing section, image histogram matching has been used to potentially improve the overall registration in terms of precision and computation time. This was tested on two datasets on PC 1. The summary of the results can be seen in Table 5.
Despite the input DRR projections having a pixel value distribution more uniform with that of the reference X-ray image, we found only minor improvements in the similarities for Patient 1, at the cost of a 0.3 s increase in mean computation time. For Patient 2, however, there was a decrease in the similarities with a lower mean computation time (a 0.18 s difference). In both cases, the best-registered projection lay at the limits of the angle range, at 15° for Patient 1 and at 345° (normalized −15°) for Patient 2, which differs greatly from the results without histogram matching. Figure 13 shows the difference in the pixel intensity distributions of both datasets. The reference X-ray images are the same as in Figure 8 and Figure 9, respectively.

5. Discussion and Conclusions

In this paper, we describe a novel technical solution for simulating the planning of iliosacral screw placement for iliosacral joint injuries. The system is able to virtually place an iliosacral screw into preoperative 3D CT scans in the form of DRR projections and to match the respective projections containing the screw to the intraoperative X-ray images. This procedure enables surgeons to investigate the operating area with the virtual screw at any time during the surgery and guides them toward the optimal screw trajectory that avoids the surrounding tissues.
The proposed software solution enables modeling of the pelvis area through a multiregional segmentation procedure (Figure 2) using the region growing method. This procedure then allows for the virtual placement of the iliosacral screw (Figure 3), which is subsequently used for the generation of DRR projections. These projections represent a 3D model of the investigated pelvis area, enabling spatial manipulation of the screw in 3D space instead of in 2D X-ray images. To find the most suitable 3D slice, we incorporated a multimodal registration procedure aimed at finding the DRR projection that best corresponds to the reference X-ray image (Figure 7).
This is a versatile procedure that enables surgeons to build a 3D model of the pelvis area and virtually place an iliosacral screw in 3D space, with the subsequent generation of DRR projections, for which we created the MATLAB application CT2DDR (Figure 4). The procedure can be applied to any X-ray image by using the proposed multimodal (X-ray/CT) registration, which selects the 3D slice that best corresponds to the reference X-ray image. This provides an effective transformation of the 2D X-ray views taken during surgery into 3D space, allowing better spatial orientation and surgery planning in the preoperative phase. We built the proposed registration system on the SSIM and the correlation coefficient, as these parameters are capable of selecting the 3D slices most similar to a reference X-ray image (Figure 10 and Figure 11).
In our study, we compared the native intraoperative X-ray images with selected reconstructed DRR slices containing the iliosacral screw and pointed out the differences, which illustrate the performance of the multimodal registration (Figure 8 and Figure 9). On the other hand, it is clinically important to compare the original CT slices with the reconstructed DRR projections containing the iliosacral screw to verify whether the screw is safely led away from the surrounding critical tissues, as we report in Figure 14.
An important aspect of the registration procedure is the computation time. Here, we publish a comparative analysis of the computational cost for various numbers of 3D projections (Figure 12), which shows that the number of projections is crucial for the speed of the whole registration process. This is one of the limitations of the multimodal registration procedure. For this reason, it is beneficial to select a narrower range of 3D slices for the registration to save computational costs.
The presented work is a pilot study introducing the proposed software tools for planning iliosacral screw placement for iliosacral joint injuries. The pilot study serves for initial testing on two clinical cases, as reported in this paper, to justify the initial performance and usability of the system in the clinical practice of traumatology. Despite the limited number of patients in this study, we achieved favorable results, which indicate the potential of using this system in clinical conditions.
At present, the proposed software system is being used for iliosacral joint fracture reposition surgery at the University Hospital in Ostrava. The system brings significant benefits, such as the investigation of the iliosacral screw in 3D space and the automatic selection of the view containing the screw according to the reference X-ray image, which is normally provided by the C-arm during surgery. On the other hand, we are aware of limitations, which will be the focus of future improvements of this system. Although the planning system provides augmented reality in the form of a virtual iliosacral screw, it would be beneficial to incorporate an attention-based system that predicts the potential damage to the surrounding tissues caused by leading the screw. Here, we plan to implement segmentation of the surrounding tissues and check whether the screw intersects any of these segmented tissues. As part of this procedure, we plan to implement a screw trajectory optimization technique that should be able to predict the best way to lead the screw with minimal damage to the surrounding tissues.

Author Contributions

Conceptualization, V.B. and J.K.; methodology, J.K. and V.B.; software, V.B., J.K., D.O. and K.D.; validation, D.O., J.K., R.M. and M.C.; formal analysis, M.C. and R.M.; investigation, V.B., J.K. and D.O.; resources, M.C.; data curation, R.M.; writing—original draft preparation, V.B., J.K., R.M. and D.O.; writing—review and editing, M.C.; visualization, V.B.; supervision, R.M. and M.C.; project administration, M.C.; funding acquisition, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by project No. CZ.02.1.01/0.0/0.0/17 049/0008441, Innovative Therapeutic Methods of Musculoskeletal System in Accident Surgery within the Operational Programme Research, Development and Education financed by the European Union and by the state budget of the Czech Republic. The work and the contributions were supported by the project SV4502261/SP2022/98 ‘Biomedical Engineering systems XVIII’.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University Hospital in Ostrava, Czech Republic, with reference number 1030/2022.

Informed Consent Statement

This study operates only with native X-ray and CT images, which had been completely anonymized before starting this study. Thus, we only use native images without any patient records.

Data Availability Statement

The CT and X-ray images and software CT2DDR from this research will be available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Birkfellner, W.; Burgstaller, W.; Wirth, J.; Baumann, B.; Jacob, A.L.; Bieri, K.; Traud, S.; Strub, M.; Regazzoni, P.; Messmer, P. LORENZ: A system for planning long-bone fracture reduction. In Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display; SPIE—The International Society for Optical Engineering: Bellingham, WA, USA, 2003; Volume 5029, pp. 500–503.
  2. Jiménez-Delgado, J.J.; Paulano-Godino, F.; PulidoRam-Ramírez, R.; Jiménez-Pérez, J.R. Computer assisted preoperative planning of bone fracture reduction: Simulation techniques and new trends. Med. Image Anal. 2016, 30, 30–45.
  3. Rambani, R.; Varghese, M. Computer assisted navigation in orthopaedics and trauma surgery. Orthop. Trauma 2014, 28, 50–57.
  4. Nolte, L.-P.; Zamorano, L.; Visarius, H.; Berlemann, U.; Langlotz, F.; Arm, E.; Schwarzenbach, O. Clinical evaluation of a system for precision enhancement in spine surgery. Clin. Biomech. 1995, 10, 293–303.
  5. Merloz, P.; Tonetti, J.; Eid, A.; Faure, C.; Lavallee, S.; Troccaz, J.; Sautot, P.; Hamadeh, A.; Cinquin, P. Computer Assisted Spine Surgery. Clin. Orthop. Relat. Res. 1997, 337, 86–96.
  6. Hernandez, D.; Garimella, R.; Eltorai, A.E.M.; Daniels, A.H. Computer-assisted Orthopaedic Surgery. Orthop. Surg. 2017, 9, 152–158.
  7. Chang, J.-D.; Kim, I.-S.; Bhardwaj, A.M.; Badami, R.N. The Evolution of Computer-Assisted Total Hip Arthroplasty and Relevant Applications. Hip Pelvis 2017, 29, 1–14.
  8. Picard, F.; Deakin, A.H.; Riches, P.E.; Deep, K.; Baines, J. Computer assisted orthopaedic surgery: Past, present and future. Med. Eng. Phys. 2019, 72, 55–65.
  9. Zheng, G.; Nolte, L.-P. Computer-Aided Orthopaedic Surgery: State-of-the-Art and Future Perspectives. Adv. Exp. Med. Biol. 2018, 1093, 1–20.
  10. Pan, W.-B.; Liang, J.-B.; Wang, B.; Chen, G.-F.; Hong, H.-X.; Li, Q.-Y.; Chen, H.-X. The invention of an iliosacral screw fixation guide and its preliminary clinical application. Orthop. Surg. 2012, 4, 55–59.
  11. Gandhi, G.; Vijayvargiya, M.; Shetty, V.; Agashe, V.; Maheshwari, S.; Monteiro, J. CT-guided percutaneous sacroiliac stabilization in unstable pelvic fractures: A safe and accurate technique. Rev. Bras. Ortop. 2018, 53, 323–331.
  12. Richter, P.; Gebhard, F.; Dehner, C.; Scola, A. Accuracy of computer-assisted iliosacral screw placement using a hybrid operating room. Injury 2016, 47, 402–407.
  13. Tonetti, J.; Boudissa, M.; Kerschbaumer, G.; Seurat, O. Role of 3D intraoperative imaging in orthopedic and trauma surgery. Orthop. Traumatol. Surg. Res. 2020, 106, S19–S25.
  14. Xu, P.; Wang, H.; Liu, Z.-Y.; Mu, W.-D.; Xu, S.-H.; Wang, L.-B.; Chen, C.; Cavanaugh, J.M. An evaluation of three-dimensional image–guided technologies in percutaneous pelvic and acetabular lag screw placement. J. Surg. Res. 2013, 185, 338–346.
  15. Takao, M.; Nishii, T.; Sakai, T.; Yoshikawa, H.; Sugano, N. Iliosacral screw insertion using CT-3D-fluoroscopy matching navigation. Injury 2014, 45, 988–994.
  16. Guo, F.; Dai, J.; Zhang, J.; Ma, Y.; Zhu, G.; Shen, J.; Niu, G. Individualized 3D printing navigation template for pedicle screw fixation in upper cervical spine. PLoS ONE 2017, 12, e0171509.
  17. Liu, F.; Yu, J.; Yang, H.; Cai, L.; Chen, L.; Lei, Q.; Lei, P. Iliosacral screw fixation of pelvic ring disruption with tridimensional patient-specific template guidance. Orthop. Traumatol. Surg. Res. 2022, 108, 103210.
  18. Vaughan, N.; Dubey, V.N.; Wainwright, T.W.; Middleton, R.G. A review of virtual reality based training simulators for orthopaedic surgery. Med. Eng. Phys. 2016, 38, 59–71.
  19. Tonetti, J.; Vadcard, L.; Girard, P.; Dubois, M.; Merloz, P.; Troccaz, J. Assessment of a percutaneous iliosacral screw insertion simulator. Orthop. Traumatol. Surg. Res. 2009, 95, 471–477.
  20. Yang, Q.; Feng, S.; Song, J.; Cheng, C.; Liang, C.; Wang, Y. Computer-aided automatic planning and biomechanical analysis of a novel arc screw for pelvic fracture internal fixation. Comput. Methods Programs Biomed. 2022, 220, 106810.
  21. Zakariaee, R.; Schlosser, C.L.; Baker, D.R.; Meek, R.N.; Coope, R.J. A feasibility study of pelvic morphology for curved implants. Injury 2016, 47, 2195–2202.
  22. Alambeigi, F.; Wang, Y.; Sefati, S.; Gao, C.; Murphy, R.J.; Iordachita, I.; Taylor, R.H.; Khanuja, H.; Armand, M. A Curved-Drilling Approach in Core Decompression of the Femoral Head Osteonecrosis Using a Continuum Manipulator. IEEE Robot. Autom. Lett. 2017, 2, 1480–1487.
  23. Qi, J.; Hu, Y.; Yang, Z.; Dong, Y.; Zhang, X.; Hou, G.; Lv, Y.; Guo, Y.; Zhou, F.; Liu, B.; et al. Incidence, Risk Factors, and Outcomes of Symptomatic Bone Cement Displacement following Percutaneous Kyphoplasty for Osteoporotic Vertebral Compression Fracture: A Single Center Study. J. Clin. Med. 2022, 11, 7530.
  24. Long, Y.; Wang, T.; Xu, X.; Ran, G.; Zhang, H.; Dong, Q.; Zhang, Q.; Guo, J.; Hou, Z. Risk Factors and Outcomes of Extended Length of Stay in Older Adults with Intertrochanteric Fracture Surgery: A Retrospective Cohort Study of 2132 Patients. J. Clin. Med. 2022, 11, 7366.
  25. Bashiri, F.S.; Baghaie, A.; Rostami, R.; Yu, Z.; D’Souza, R.M. Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach. J. Imaging 2018, 5, 5.
  26. Gao, X.; Du, J.; Zhang, Y.; Gong, Y.; Zhang, B.; Qu, Z.; Hao, D.; He, B.; Yan, L. Predictive Factors for Bone Cement Displacement following Percutaneous Vertebral Augmentation in Kümmell’s Disease. J. Clin. Med. 2022, 11, 7479.
  27. Cahill, N.; Noble, J.A.; Hawkes, D.J.; Noble, A. A Demons Algorithm for Image Registration with Locally Adaptive Regularization. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 574–581.
  28. Thirion, J.-P. Image matching as a diffusion process: An analogy with Maxwell’s demons. Med. Image Anal. 1998, 2, 243–260.
  29. Abidi, A.I.; Singh, S. Deformable Registration Techniques for Thoracic CT Images: An Insight into Medical Image Registration; Springer: Singapore, 2020.
  30. El-Baz, A.S.; Acharya, U.R.; Laine, A.F.; Suri, J.S. (Eds.) Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies: Volume II; Springer: New York, NY, USA, 2011.
  31. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
Figure 1. General workflow of 3D generation to image registration.
Figure 2. Process of segmenting and separating pelvis from CT slices into individual anatomical structures. Left to right: thresholding output (green color), region growing output (yellow color) and mask splitting output (multiple colors). The top row is a representation of masks in 3D space, and the bottom row is in 2D space.
Figure 3. (a) placement of the guide in 3-matic, (b) maximum intensity projection (MIP) of 30 CT slices displaying the location of the guide and its insertion depth.
Figure 4. Block diagram of the process of generating DRR projections and the proposed software for generating DRR projections.
Figure 5. Generated series of DRR projections with rotations per 45°.
Figure 6. Block diagram of the multimodal registration algorithm for matching X-ray and CT images.
Figure 7. Result of Demons algorithm for multimodal image registration. (Left, Middle, Right): Target (reference) X-ray image, moving (input) DRR projection image, the output of image registration.
Figure 8. The output of image registration for Patient 1. Left to Right: Reference X-ray image, input DRR projection, output DRR projection, alpha blending composite image and RGB composite image.
Figure 9. The output of image registration for Patient 2. Left to Right: Reference X-ray image, input DRR projection, output DRR projection, alpha blending composite image and RGB composite image.
Figure 10. Similarity metrics of image registration of 30° angle range for Patient 1. DRR projection with the highest similarity is marked as green.
Figure 11. Similarity metrics of image registration of a 30° angle range for Patient 2. The DRR projection with the highest similarity is marked in green.
Figure 12. Computation times in minutes of image registration per range of rotation angles of DRR projections for two patient datasets. Acquired on PC 1.
Figure 13. Best registered DRR projections with and without histogram matching. (Top Row) is Patient 1, (Bottom Row) is Patient 2.
Figure 14. Comparison of three various CT slices (Top Row) of Patient 2 with the same DRR projections with the virtual iliosacral screw (Bottom Row).
Table 1. Technical and metadata information of X-ray image datasets per patient.
          | Imaging Device | Modality | Bit-Depth | Image Resolution [Pixels] | Pixel Spacing [mm] | View Position | Body Part Examined
Patient 1 | Kodak Elite CR | CR       | 16        | 2048 × 2500               | 0.17/0.17          | AP            | Pelvis
Patient 2 | Samsung GC85   | DX       | 16        | 2994 × 2990               | 0.13/0.13          | AP            | Pelvis
Table 2. Technical and metadata information of computed tomography image datasets per patient.
          | Imaging Device        | Modality | Bit-Depth | Image Resolution [Pixels] | Pixel Spacing [mm]
Patient 1 | Siemens Definition AS | CT       | 16        | 512 × 512                 | 0.81/0.81
Patient 2 | Siemens Somatom Force | CT       | 16        | 512 × 512                 | 0.94/0.94
          | Convolution Kernel | Pitch Factor | Number of Slices | Slice Thickness [mm] | Body Part Examined
Patient 1 | B20f               | 1.05         | 1561             | 0.6                  | Abdomen
Patient 2 | Br40d/2            | 1.4          | 779              | 0.75                 | Abdomen
Table 3. Summary of highest similarity results and their DRR projection per patient.
               | Patient 1 | Patient 2
SSIM [-]       | 0.42      | 0.50
CORR [-]       | 0.26      | 0.36
DRR projection |           |
Table 4. Summary of computation times in minutes of image registration per range of rotation angles of DRR projections across all patient datasets.
                | Computation Times on PC 1 | Computation Times on PC 2
DRR Projections | Patient 1 | Patient 2     | Patient 1 | Patient 2
1°              | 0.13      | 0.15          | 0.11      | 0.13
10°             | 1.53      | 1.87          | 1.32      | 1.36
20°             | 3.02      | 3.40          | 2.23      | 2.64
30°             | 3.93      | 4.64          | 3.26      | 3.89
360°            | 46.45     | 55.39         | 38.24     | 45.15
Table 5. Summary of similarity metrics and mean computation times of two datasets in an angle range of 30°.
          | Without Histogram Matching                            | With Histogram Matching
          | SSIM [-] | CORR [-] | Mean Time [s] | DRR Projection  | SSIM [-] | CORR [-] | Mean Time [s] | DRR Projection
Patient 1 | 0.42     | 0.26     | 7.87          |                 | 0.51     | 0.42     | 8.17          | 15°
Patient 2 | 0.50     | 0.36     | 9.86          |                 | 0.40     | 0.42     | 9.68          | 345°
