Article

Deep-Learning-Based Automated Rotator Cuff Tear Screening in Three Planes of Shoulder MRI

1 Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
2 Advanced Medical Imaging Institute, Korea University College of Medicine, Seoul 02841, Republic of Korea
3 AI Center, Korea University Anam Hospital, Seoul 02841, Republic of Korea
4 Institute for Healthcare Service Innovation, College of Medicine, Korea University, Seoul 02841, Republic of Korea
5 JLK Inc., Seoul 06141, Republic of Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Diagnostics 2023, 13(20), 3254; https://doi.org/10.3390/diagnostics13203254
Submission received: 26 September 2023 / Revised: 6 October 2023 / Accepted: 16 October 2023 / Published: 19 October 2023

Abstract

This study aimed to develop a screening model for rotator cuff tear detection in all three planes of routine shoulder MRI using a deep neural network. A total of 794 shoulder MRI scans (374 men and 420 women; aged 59 ± 11 years) were utilized. Three musculoskeletal radiologists labeled the rotator cuff tears. A YOLO v8 rotator cuff tear detection model was then trained, both with all imaging planes simultaneously and with axial, coronal, and sagittal images separately. The performances of the models were evaluated and compared using receiver operating characteristic (ROC) curves and the area under the curve (AUC). The AUC was highest when using all imaging planes (0.94; p < 0.05). Among the single imaging planes, the axial plane showed the best performance (AUC: 0.71), followed by the sagittal (AUC: 0.70) and coronal (AUC: 0.68) planes. The sensitivity and accuracy were also highest in the model with all-plane training (0.98 and 0.96, respectively). Thus, deep-learning-based automatic rotator cuff tear detection can be useful for detecting torn areas in various regions of the rotator cuff in all three imaging planes.

1. Introduction

The rotator cuff stabilizes the glenohumeral joint during movement by compressing the humeral head against the glenoid [1]. It comprises the supraspinatus, infraspinatus, teres minor, and subscapularis muscles. Rotator cuff tears are the most likely source of shoulder pain in adults [2,3]. Their incidence is increasing with improving life expectancy, and they may affect up to 20–40% of people according to one report [4]. Although the exact pathogenesis remains controversial, a combination of intrinsic and extrinsic factors is likely responsible for most rotator cuff tears. Arthroscopic rotator cuff repair has become the standard of care for rotator cuff tears [5,6]. Distinguishing a rotator cuff tear from other conditions, such as adhesive capsulitis, solely through physical examination can be challenging; imaging modalities therefore play crucial roles in diagnosing rotator cuff tears. Magnetic resonance imaging (MRI) and ultrasonography (US) are the best noninvasive modalities for identifying and evaluating rotator cuff lesions [7,8]. MRI allows evaluation of the entire cuff with a sufficient field of view, whereas US provides a limited window on the rotator cuff tendons and depends largely on the operator’s skill and experience. Because the rotator cuff tendons are curved structures surrounding the humeral head, a single imaging plane is limited in evaluating the entire cuff: some lesions are well visualized in the coronal plane, while others are better seen in the sagittal or axial plane. Owing to these anatomical and pathological complexities, even experienced musculoskeletal radiologists require attention and time to interpret shoulder MRI. In addition to the increasing incidence of tears, advances in scanning techniques have reduced scan times, allowing more examinations within a limited timeframe and considerably increasing the number of MRIs that must be read.
Despite the increase in the number of shoulder MRI scans, experienced musculoskeletal radiologists remain in short supply, both geographically and in terms of availability over time. On a positive note, the growing number of shoulder MRI examinations can provide a wealth of data for developing automated deep learning models for MRI interpretation.
With the advent of deep learning techniques, numerous models have been applied to screen for and assist with labor-intensive radiological tasks in musculoskeletal imaging, such as bone age assessment in the hand or elbow, fracture detection in the axial or peripheral skeleton, arthritis grading in the knee or sacroiliac joints, muscle quality quantification, muscle and bone segmentation at various sites, and the prediction of clinical outcomes [9,10,11,12,13]. Most of these tasks are time-consuming, and some may even be impossible for a human radiologist to perform. In shoulder MRI, the diagnosis of rotator cuff tears and the quantification of rotator cuff muscle degeneration are common indications for applying deep learning techniques, as is the acceleration of imaging time [14,15,16,17,18,19,20]. A shoulder MRI examination typically comprises over a hundred images from various sequences and imaging planes, which take considerable time to interpret. One of the primary roles of shoulder MRI is to screen for rotator cuff tears, and several previous studies have utilized deep learning techniques for rotator cuff tear detection in shoulder MRI [21,22,23,24,25]. Despite their good reported performance, these studies have limitations in terms of the input data and labeling methods that can be applied in clinical practice: they used only coronal images or non-fat-suppressed images, or classified tears based on operative records, and did not consider subscapularis and infraspinatus tears. Because a rotator cuff tear can be obscured in a single imaging plane depending on its location and size, evaluation in all planes is required.
This study aimed to develop and validate a screening model for detecting a rotator cuff tear in all three planes of routine shoulder MRI using a deep neural network (DNN).

2. Materials and Methods

This study was approved by the Institutional Review Board of Korea University Anam Hospital. Shoulder MRI scans conducted between January 2010 and September 2019 were collected. All shoulder MRIs were performed on 3-Tesla machines, including the Magnetom TrioTim, Skyra, and Prisma (Siemens, Erlangen, Germany) and the Achieva (Philips, Best, The Netherlands). The examinations were conducted with a dedicated shoulder coil, with patients supine, the shoulder joint in neutral position, and palms facing upward. Each scan included at least one fat-suppressed axial, coronal, and sagittal imaging plane, with the imaging planes set orthogonal to the glenohumeral joint. The exclusion criteria comprised individuals under 20 years of age, contrast-enhanced examinations, arthrographic examinations, postoperative images, and poor image quality due to factors such as motion artifacts or improper shoulder positioning. To ensure the highest standards of image quality, two board-certified musculoskeletal radiologists, each with more than 3 years of experience, assessed the appropriateness of each image based on the radiologic reports and, on occasion, the images themselves. All images were stored in the Digital Imaging and Communications in Medicine (DICOM) format, a standard format for medical images, and underwent thorough anonymization to protect patient privacy.

2.1. Image Labeling

Three board-certified musculoskeletal radiologists categorized the images as either “tear” or “no tear”, with evident full or partial fiber disruption of the tendon categorized as a “tear” and a normal tendon fiber or simple signal change of the tendon without fiber disruption regarded as “no tear”. All rotator cuff tears located in the supraspinatus, infraspinatus, teres minor, and subscapularis were meticulously examined in all axial, coronal, and sagittal planes of the shoulder MRI scans. Torn tendon spaces were segmented by trained researchers under the supervision of radiologists using AIX 2.0.2 (JLK Inc., Seoul, Republic of Korea). The flowchart of the methodology is demonstrated in Figure 1.
The segmentation process involved the creation of freeform lines outlining all rotator cuff tears, encompassing the supraspinatus, infraspinatus, and subscapularis, within all three imaging planes of fat-suppressed T2-weighted or proton density-weighted images (Figure 2). The cross-link function provided by the software assisted in identifying the corresponding point in the coronal image, which corresponds to the sagittal and axial images. In cases of multiple lesions, each rotator cuff tear was segmented separately. Subsequent to the segmentation procedure, rectangular patches were automatically generated, encompassing irregularly shaped torn segments. These patches were then utilized for the implementation of the model.
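The automatic generation of a rectangular patch from a freeform tear outline amounts to taking the axis-aligned bounding box of the outline's points. The sketch below illustrates this step in Python, assuming each outline is stored as a list of (x, y) pixel coordinates; the function name is illustrative and not taken from the authors' AIX software.

```python
def outline_to_bbox(points, margin=0):
    """Return the rectangle (x_min, y_min, x_max, y_max) enclosing a
    freeform outline, optionally padded by a pixel margin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Example: an irregularly shaped torn-segment outline
outline = [(120, 88), (131, 90), (140, 104), (125, 110), (118, 97)]
print(outline_to_bbox(outline))  # (118, 88, 140, 110)
```

Each irregular segmented region thus yields one rectangular patch suitable for training an object detector.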

2.2. Model Implementation

The dataset was randomly divided into 70% for training, 10% for tuning, and 20% for the final evaluation. The algorithm was designed to detect and localize rotator cuff tears. We used the original architecture of You Only Look Once (YOLO) v8 [26,27], chosen for its robustness to frequent occlusion and small object sizes, to improve detection performance in shoulder MRI. This network was deeply fine-tuned and trained with regions of interest (ROIs) of torn and normal shoulder tendons. After training, we examined the location and classification of lesions in the test sets. Like its predecessors, YOLO partitions each image using an S × S grid [26,27]. YOLOv8 builds on the preceding iterations with a revised network structure incorporating both a Feature Pyramid Network (FPN) and a Path Aggregation Network (PAN), along with an annotation tool that streamlines labeling through automated labeling, labeling shortcuts, and adaptable hotkeys; the combination of these features simplified image annotation for model training. A detection was required to achieve a score of 0.5 or higher, emphasizing the significance of both classification and detection [27]. All images were resized to 512 × 512 pixels for training and inference. To enhance model performance, the training datasets were preprocessed via histogram matching to align the intensity distributions across all images, and all images underwent intensity normalization, subtracting the mean and dividing by the standard deviation. Resizing was performed using third-order spline interpolation. Furthermore, various image augmentation techniques were employed, including adjustments to brightness, contrast, Gaussian noise, blur, inversion, and sharpness, as well as geometric modifications such as shifting, zooming, and rotation.
These augmentations were employed to mitigate scanner-specific biases and to bolster the resilience of the neural network against sources of variability unrelated to the radiological categories. Training used the Adam optimizer [28]; the tuning loss plateaued after an epoch, and the model with the lowest tuning loss was selected. The structure of the model is illustrated in Figure 3.
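The histogram matching and intensity normalization steps described above can be sketched as follows. This is a minimal NumPy illustration assuming grayscale images stored as floating-point arrays; it is not the authors' actual preprocessing code.

```python
import numpy as np

def normalize(img):
    """Intensity normalization: subtract the mean, divide by the standard deviation."""
    return (img - img.mean()) / img.std()

def match_histogram(source, reference):
    """Map source intensities onto the reference image's intensity distribution
    by aligning their cumulative histograms (quantile mapping)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size
    r_quantiles = np.cumsum(r_counts) / reference.size
    matched_vals = np.interp(s_quantiles, r_quantiles, r_vals)
    return matched_vals[s_idx].reshape(source.shape)

# Mock MR slices at the paper's 512 x 512 working resolution
rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(512, 512)).astype(float)
ref = rng.integers(500, 3000, size=(512, 512)).astype(float)
out = normalize(match_histogram(img, ref))
```

After this step, every training image shares a common intensity distribution and has zero mean and unit variance.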
These datasets were loaded onto a Graphics Processing Unit (GPU) devbox server running Ubuntu 20.04 with CUDA 11.2 and cuDNN (version 11.1; NVIDIA Corporation, Santa Clara, CA, USA), part of the NVIDIA deep learning software development kit. The server contained four NVIDIA A6000 GPUs with 48 GB of memory each. We used an initial learning rate of 0.001, decayed by a factor of 10 at each decay step.
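The stated schedule (initial learning rate 0.001, decayed by a factor of 10) can be sketched as a small helper; the helper name is illustrative, and the event that triggers each decay (e.g., a tuning-loss plateau) is an assumption, since it is not specified here.

```python
def learning_rate(decay_steps, initial_lr=1e-3, factor=10):
    """Learning rate after a given number of factor-of-10 decay events.
    What triggers each decay (e.g., a tuning-loss plateau) is assumed."""
    return initial_lr / factor ** decay_steps

schedule = [learning_rate(k) for k in range(3)]  # 0.001, 0.0001, 0.00001
```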

2.3. Statistical Analysis

We calculated the area under the curve (AUC) for the receiver operating characteristic (ROC) curve and the accuracy using the pROC package (version 1.10) in R (R Foundation for Statistical Computing, Vienna, Austria). DeLong tests were performed to compare the AUC values of the classifier models using the same package. Statistical significance was set at two-sided p < 0.05.
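The AUC compared here can be understood via the Mann–Whitney formulation: it equals the probability that a randomly chosen positive (tear) case receives a higher score than a randomly chosen negative (no-tear) case, which matches the empirical ROC AUC that pROC reports. A minimal Python sketch, for illustration only (the authors' analysis used pROC in R):

```python
import numpy as np

def auc_from_scores(labels, scores):
    """Empirical ROC AUC via the Mann–Whitney statistic: the fraction of
    positive/negative pairs in which the positive scores higher
    (ties count as 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    pos = np.asarray(scores, dtype=float)[labels]
    neg = np.asarray(scores, dtype=float)[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)

# Toy example: 3 tear cases, 2 no-tear cases
print(auc_from_scores([1, 1, 1, 0, 0], [0.9, 0.8, 0.3, 0.4, 0.1]))  # 5/6 ≈ 0.833
```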

3. Results

3.1. Subject Demographics

A total of 794 shoulder MRI scans were included (374 men and 420 women; aged 59 ± 11 years). Out of these, 100 subjects had no evidence of rotator cuff tear, while the remaining 694 had a rotator cuff tear. We extracted a total of 8756 image patches from patients with a confirmed rotator cuff tear and 2052 patches from those with no rotator cuff tears. The data distribution is presented in Table 1.

3.2. Performance of the Model

We first evaluated the performance of the model using the intersection over union (IoU) between the predicted bounding box and the ground truth, together with the confidence score (the classification value of the lesion). A predicted lesion in the test dataset was defined as correct if its IoU exceeded 0.5. In addition, we used non-maximum suppression (NMS) to remove duplicate boxes when inferring tears. To evaluate the detection performance of the YOLO v8 model, the cutoff threshold (0.2) was determined using the sensitivity and the average false positives of the first algorithm.
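The IoU criterion and the NMS step can be sketched in Python as follows; this is a minimal illustration with boxes represented as (x1, y1, x2, y2) tuples, not the authors' implementation.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep boxes in descending score order,
    dropping any box that overlaps an already-kept box above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first (IoU ≈ 0.68)
```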
The highest AUC was achieved when all imaging planes were used (0.94), and this difference was statistically significant compared to each individual imaging plane (p = 0.0002, 0.00006, and 0.00002, respectively). Sensitivity, precision, and accuracy were also highest in the model with all-plane training. Among the single imaging planes, the axial plane showed the highest AUC (0.71), followed by the sagittal (0.70) and coronal (0.68) planes. The highest accuracy was achieved using all imaging planes (96%). Regarding accuracy with a single imaging plane, the sagittal plane showed the highest accuracy (70%), outperforming the axial and coronal planes (58% and 55%, respectively). The performance of the model is summarized in Table 2, and the ROC curves for the model using all imaging planes and each individual imaging plane are shown in Figure 4.

4. Discussion

In this study, we developed a screening algorithm based on YOLO v8 [26,27] to predict rotator cuff tear in shoulder MRI using high-quality datasets confirmed by expert radiologists. We used whole MRI images as the input data and used patch images drawn by musculoskeletal radiologists to train and fine-tune our algorithms. The advantage of this network is that it can simultaneously predict rotator cuff tear at various locations. It is important to determine whether the detection ability of the algorithm is similar to that of the expert radiologists in a computer-aided detection and diagnosis system. To the best of our knowledge, this is the first study to screen rotator cuff tear at all locations in all imaging planes.
The use of AI, especially deep learning techniques, has been introduced in various fields of musculoskeletal imaging, including radiography, computed tomography (CT), MRI, and US. The integration of deep learning into radiography has yielded promising outcomes: studies have shown its capability for swift and precise bone age assessment in hand or elbow radiographs, fracture detection across diverse anatomical regions, and the grading of osteoarthritis in knee radiographs [9,10,11,29]. In shoulder imaging, Kim et al. suggested using a deep learning model for ruling out rotator cuff tears on shoulder radiographs, redefining the role of the conventional radiograph [30]. Lee et al. reported a deep-learning-based model for analyzing rotator cuff tears using ultrasound imaging [31]. Studies on quantifying rotator cuff muscle quality using deep learning have primarily relied on CT and MRI and have shown promising results [16,32]. These tasks are recognized as labor-intensive, time-consuming, and, in some cases, even impossible for radiologists to perform. In the context of shoulder MRI, the evaluation of rotator cuff tears presents another promising application for deep learning, especially considering the increasing number of examinations and the shortage of experts [21,22,23,24,25].
Shoulder MRI is difficult to interpret, even for clinicians, because of the anatomic complexity of the shoulder joint, with its small curved tendon and ligament structures. All three planes should be examined carefully because partial volume averaging can obscure a lesion when only a single imaging plane is consulted [33,34]. Although several studies have applied deep learning techniques to interpret shoulder MRI for diagnosing rotator cuff tears, they have had limitations owing to the quality of the input data regarding imaging sequences, imaging planes, and reference standards [21,22,23,24,25]. Kim et al. [21] and Sezer et al. [22] proposed models for classifying rotator cuff tears from MRI, but their models were trained using only coronal images. Shim et al. [23] reported a rotator cuff tear classification model using a 3D convolutional neural network with images from all three planes. However, the labeling was based on arthroscopic findings and used the DeOrio and Cofield classification system [35], which mainly evaluates supraspinatus tears. Yao et al. [24] proposed a deep learning model for detecting only supraspinatus tears on T2-weighted coronal images. The far anterior portion of the supraspinatus and the far posterior portion of the infraspinatus are not perpendicular to the coronal plane, resulting in unclear delineation of rotator cuff tears at these locations on coronal images; the same phenomenon applies to other imaging planes and other rotator cuff areas. Many previous studies focused only on the supraspinatus tendon or did not mention subscapularis tears, which may have been overlooked and are sometimes described as “hidden lesions” [36]. Although the supraspinatus is the most common location of rotator cuff tears, the subscapularis tendon, which is best seen in the sagittal and axial planes, should be included in screening.
Our model detects rotator cuff tears in all imaging planes and assists in the diagnosis of rotator cuff tears within the numerous images found in shoulder MRI. This capability is potentially valuable for both diagnosis and treatment planning.
In our model implementation, we utilized the YOLOv8 model. In a preliminary evaluation, we compared a DenseNet classification model with the YOLOv8 model; the performance of the DenseNet model (AUC: 0.93; accuracy: 0.90) was not superior to that of the YOLOv8 model on the validation set. Despite several limitations, such as lower accuracy in detecting small targets and substantial computational requirements for feature extraction, YOLO is a powerful object detection algorithm that has been applied in various fields, notably in medical applications encompassing radiology, oncology, and surgery [37]. By rapidly identifying and localizing lesions or anatomical structures, YOLO has improved patient outcomes while reducing diagnosis and treatment times and enhancing the efficiency and accuracy of medical diagnoses and procedures [37]. Recently, a new repository that includes YOLOv8 was introduced for the YOLO model; it serves as an integrated framework for training object detection, instance segmentation, and image classification models. YOLOv8 is a recent addition to the YOLO series and stands out as an anchor-free model: unlike previous versions that rely on anchor box offsets, it directly predicts the centers of objects, resulting in faster NMS. The model outputs box coordinates, confidence scores, and class labels (lesions). Despite the known drawbacks of the YOLO family, the YOLOv8 model has been used in various medical imaging applications in radiology; in studies involving radiography and MRI, these models have demonstrated high accuracy in detecting osteochondritis dissecans in elbow radiographs, identifying foreign objects in chest radiographs, and detecting tumors in brain MRI scans [38,39,40].
In this study, the model trained with all imaging planes exhibited the best performance (AUC: 0.94), while the models trained with a single imaging plane demonstrated relatively lower performance (AUC: 0.68–0.71). Sensitivity, precision, and accuracy were also highest in the model with all-plane training. Although variation in the number of training images could be a contributing factor, the distinct shapes of tears in different imaging planes may have enhanced the model's tear detection performance. Furthermore, despite the small differences, the axial plane displayed the highest performance among the single imaging planes. This finding is intriguing, as the coronal or sagittal plane is generally preferred for rotator cuff tear detection, given that supraspinatus tears are the most common and are well visualized in those planes [41]. Interpreted differently, axial images may contain more information about rotator cuff tears than conventionally believed. Human readers tend to focus on specific imaging planes when a rotator cuff tear is evident; the deep learning model, however, independently screens all images and assesses the presence of tears. This functionality can assist radiologists in the labor-intensive and time-consuming process of MRI interpretation. In addition, if AI-driven imaging biomarkers for rotator cuff tears can be found in the axial plane, this would add further value to deep learning research.
Our preliminary study had several limitations. Firstly, we only conducted an internal validation test. Since our dataset comprised routine MRI protocols from various machines and vendors, it exhibited a significant degree of heterogeneity. Nonetheless, external validation using shoulder MRIs from other machines or institutions with concrete reference standards by multiple readers is necessary to validate our results. Additionally, a reader study comparing the model with human experts might also be required. Secondly, while our methods demonstrated good performance in terms of the AUC (0.94), achieving an enhanced specificity score is crucial for clinical applications. These challenges can potentially be addressed through the utilization of larger datasets, diverse augmentations, and algorithm enhancements. Thirdly, we did not specify the anatomical location of the rotator cuff tear, such as whether it affected the supraspinatus, infraspinatus, or subscapularis. Our model was primarily designed to screen for rotator cuff tears in numerous shoulder MRI images, and as such, it did not include the nomination of anatomical locations in its labeling. However, for practical clinical use by general physicians and orthopedic surgeons, specifying the location in addition to detecting the lesion is essential. With the additional detailed labeling or application of an automated anatomic labeling algorithm [42], the next version of the model can provide information about the location and size of the rotator cuff tear. Finally, our model combined both full-thickness and partial-thickness tears under the rotator cuff tear category. Subclassifying tears into full-thickness and partial-thickness categories may be necessary, as clinical decision making can vary based on the tear thickness. To address these issues, further development involving a larger dataset and more detailed labeling that includes the class of tear thickness is warranted. 
Since different grading systems are applied to supraspinatus and subscapularis tears, it might be necessary to take a step-wise approach: initially screening for rotator cuff tears using our preliminary model and subsequently classifying the tear details including the location and thickness using secondary models.

5. Conclusions

Our deep-learning-based automatic rotator cuff tear screening model effectively aided in the detection of rotator cuff tears across all three image planes. With the increasing number of shoulder MRI scans and a growing demand for lesion detection support, a deep learning model can effectively assist in detecting rotator cuff tears.

Author Contributions

Conceptualization, K.-S.A., H.-J.P. and Y.C.; data curation, K.-S.A., H.-J.P., Y.-S.K. and S.L.; formal analysis, K.-S.A. and Y.C.; investigation, C.H.K.; methodology, K.-S.A., H.-J.P. and Y.C.; software, H.-J.P., Y.C. and D.K.; supervision, K.-S.A. and C.H.K.; validation, K.-C.L. and Y.C.; visualization, K.-S.A., H.-J.P. and Y.C.; writing—original draft, K.-C.L. and Y.C.; writing—review and editing, K.-S.A. and C.H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a grant from the Information and Communications Promotion Fund (H1301-23-1002) through the National IT Industry Promotion Agency (NIPA), funded by the Ministry of Science and ICT (MSIT), Republic of Korea.

Institutional Review Board Statement

This study was approved by the Institutional Review Board and Ethics Committee of the Korea University Anam Hospital (IRB number: 2021AN0300).

Informed Consent Statement

Informed consent was waived because the data were collected retrospectively and analyzed anonymously.

Data Availability Statement

The raw/processed MR image dataset analyzed in this study is not publicly available.

Acknowledgments

We thank Hyun Ki Ko for his contributions to this research in terms of data acquisition and preparation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maruvada, S.; Madrazo-Ibarra, A.; Varacallo, M. Anatomy, Rotator Cuff. In StatPearls; StatPearls Publishing LLC.: Treasure Island, FL, USA, 2023. [Google Scholar]
  2. Zoga, A.C.; Kamel, S.I.; Hynes, J.P.; Kavanagh, E.C.; O’Connor, P.J.; Forster, B.B. The Evolving Roles of MRI and Ultrasound in First-Line Imaging of Rotator Cuff Injuries. AJR Am. J. Roentgenol. 2021, 217, 1390–1400. [Google Scholar] [CrossRef] [PubMed]
  3. Yamamoto, A.; Takagishi, K.; Osawa, T.; Yanagawa, T.; Nakajima, D.; Shitara, H.; Kobayashi, T. Prevalence and risk factors of a rotator cuff tear in the general population. J. Shoulder Elbow Surg. 2010, 19, 116–120. [Google Scholar] [CrossRef] [PubMed]
  4. Via, A.G.; De Cupis, M.; Spoliti, M.; Oliva, F. Clinical and biological aspects of rotator cuff tears. Muscles Ligaments Tendons J. 2013, 3, 70–79. [Google Scholar] [CrossRef] [PubMed]
  5. Pandey, V.; Jaap Willems, W. Rotator cuff tear: A detailed update. Asia Pac. J. Sports Med. Arthrosc. Rehabil. Technol. 2015, 2, 1–14. [Google Scholar] [CrossRef] [PubMed]
  6. Rho, J.Y.; Kwon, Y.S.; Choi, S. Current Concepts and Recent Trends in Arthroscopic Treatment of Large to Massive Rotator Cuff Tears: A Review. Clin. Shoulder Elb. 2019, 22, 50–57. [Google Scholar] [CrossRef]
  7. Morag, Y.; Jacobson, J.A.; Miller, B.; De Maeseneer, M.; Girish, G.; Jamadar, D. MR imaging of rotator cuff injury: What the clinician needs to know. RadioGraphics 2006, 26, 1045–1065. [Google Scholar] [CrossRef]
  8. Sharma, G.; Bhandary, S.; Khandige, G.; Kabra, U. MR Imaging of Rotator Cuff Tears: Correlation with Arthroscopy. J. Clin. Diagn. Res. 2017, 11, TC24–TC27. [Google Scholar] [CrossRef]
  9. Ahn, K.S.; Bae, B.; Jang, W.Y.; Lee, J.H.; Oh, S.; Kim, B.H.; Lee, S.W.; Jung, H.W.; Lee, J.W.; Sung, J.; et al. Assessment of rapidly advancing bone age during puberty on elbow radiographs using a deep neural network model. Eur. Radiol. 2021, 31, 8947–8955. [Google Scholar] [CrossRef]
  10. Lee, K.C.; Choi, I.C.; Kang, C.H.; Ahn, K.S.; Yoon, H.; Lee, J.J.; Kim, B.H.; Shim, E. Clinical Validation of an Artificial Intelligence Model for Detecting Distal Radius, Ulnar Styloid, and Scaphoid Fractures on Conventional Wrist Radiographs. Diagnostics 2023, 13, 1657. [Google Scholar] [CrossRef]
  11. Zhang, B.; Jia, C.; Wu, R.; Lv, B.; Li, B.; Li, F.; Du, G.; Sun, Z.; Li, X. Improving rib fracture detection accuracy and reading efficiency with deep learning-based detection software: A clinical evaluation. Br. J. Radiol. 2021, 94, 20200870. [Google Scholar] [CrossRef]
  12. Saeed, M.U.; Dikaios, N.; Dastgir, A.; Ali, G.; Hamid, M.; Hajjej, F. An automated deep learning approach for spine segmentation and vertebrae recognition using computed tomography images. Diagnostics 2023, 13, 2658. [Google Scholar] [CrossRef] [PubMed]
  13. Medina, G.; Buckless, C.G.; Thomasson, E.; Oh, L.S.; Torriani, M. Deep learning method for segmentation of rotator cuff muscles on MR images. Skeletal Radiol. 2021, 50, 683–692. [Google Scholar] [CrossRef] [PubMed]
  14. Familiari, F.; Galasso, O.; Massazza, F.; Mercurio, M.; Fox, H.; Srikumaran, U.; Gasparini, G. Artificial intelligence in the management of rotator cuff tears. Int. J. Environ. Res. Public Health 2022, 19, 16779. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the study (RCT: rotator cuff tear).
Figure 2. Segmentation of torn rotator cuff tendons on all three imaging planes. Segmentation was performed by drawing freeform lines (red) outlining all rotator cuff tears, including the supraspinatus, infraspinatus, and subscapularis, in all three imaging planes. Multiple areas of rotator cuff tears were segmented separately.
Figure 3. Network architecture of the prediction model for rotator cuff tear detection.
Figure 4. ROC curves for the model using all imaging planes (red) and using only axial (blue), sagittal (green), and coronal (black) images.
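Figure 4 compares the four models by area under the ROC curve. As a hedged illustration (not the authors' evaluation code), the AUC of a discrete ROC curve can be approximated with the trapezoidal rule over (false-positive-rate, true-positive-rate) points; the function name `roc_auc_trapezoid` is hypothetical:

```python
def roc_auc_trapezoid(points):
    """Approximate the area under an ROC curve.

    points: iterable of (fpr, tpr) pairs tracing the curve,
    e.g. one pair per decision threshold.
    """
    # Sort by FPR so the trapezoids run left to right along the curve.
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        # Trapezoid between two consecutive curve points.
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# A perfect classifier hugs the top-left corner (AUC = 1.0);
# chance-level performance follows the diagonal (AUC = 0.5).
print(roc_auc_trapezoid([(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]))  # 1.0
print(roc_auc_trapezoid([(0.0, 0.0), (1.0, 1.0)]))              # 0.5
```

Denser threshold sampling gives a finer polygonal curve and a tighter AUC estimate; in practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used instead of this sketch.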
Table 1. Number of study participants and image patches.

Subjects                        Training    Tuning    Testing
No RCT (n = 100)
  Number of patches             1511        150       391
    Axial                       566         51        152
    Coronal                     362         37        86
    Sagittal                    583         62        153
RCT (n = 694)
  Number of patches             6427        795       1534
    Axial                       753         237       435
    Coronal                     2415        289       547
    Sagittal                    2233        269       552

RCT: rotator cuff tear.
Table 2. The performance of the rotator cuff tear detection model for shoulder MRI.

Plane       AUC     Sensitivity   Specificity   Precision   Accuracy   F1 Score
All         0.94    98%           91%           98%         96%        97%
Axial       0.71    51%           100%          100%        58%        68%
Sagittal    0.70    72%           63%           92%         70%        81%
Coronal     0.68    48%           95%           98%         55%        64%
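All metrics in Table 2 derive from confusion-matrix counts (true/false positives and negatives). A minimal sketch of the standard formulas (assumed definitions, not the authors' code; `classification_metrics` and the example counts are hypothetical):

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive Table 2-style metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # tears found among all true tears
    specificity = tn / (tn + fp)                # intact cuffs correctly cleared
    precision = tp / (tp + fp)                  # flagged cases that are real tears
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # all correct calls
    # F1 is the harmonic mean of precision and sensitivity (recall).
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "accuracy": accuracy,
        "f1": f1,
    }

# Hypothetical counts for illustration only.
m = classification_metrics(tp=9, fp=2, tn=8, fn=1)
print(m["sensitivity"], m["accuracy"])  # 0.9 0.85
```

For a screening application such as this one, high sensitivity matters most: a missed tear (false negative) is costlier than a false alarm, which is why the all-plane model's 98% sensitivity is the headline result.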
Share and Cite


Lee, K.-C.; Cho, Y.; Ahn, K.-S.; Park, H.-J.; Kang, Y.-S.; Lee, S.; Kim, D.; Kang, C.H. Deep-Learning-Based Automated Rotator Cuff Tear Screening in Three Planes of Shoulder MRI. Diagnostics 2023, 13, 3254. https://doi.org/10.3390/diagnostics13203254