Article

Preliminary Study on the Diagnostic Performance of a Deep Learning System for Submandibular Gland Inflammation Using Ultrasonography Images

Yoshitaka Kise, Chiaki Kuwada, Yoshiko Ariji, Munetaka Naitoh and Eiichiro Ariji
Department of Oral and Maxillofacial Radiology, School of Dentistry, Aichi Gakuin University, Nagoya 464-8651, Japan
* Author to whom correspondence should be addressed.
J. Clin. Med. 2021, 10(19), 4508; https://doi.org/10.3390/jcm10194508
Submission received: 3 September 2021 / Revised: 24 September 2021 / Accepted: 28 September 2021 / Published: 29 September 2021
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)

Abstract

This study was performed to evaluate the diagnostic performance of a deep learning system using ultrasonography (USG) images of the submandibular glands (SMGs) in three different conditions: obstructive sialoadenitis, Sjögren’s syndrome (SjS), and normal glands. Fifty USG images with a confirmed diagnosis of obstructive sialoadenitis, 50 USG images with a confirmed diagnosis of SjS, and 50 USG images with no SMG abnormalities were included in the study. For the deep learning analysis, the training group comprised 40 obstructive sialoadenitis images, 40 SjS images, and 40 control images, and the test group comprised 10 obstructive sialoadenitis images, 10 SjS images, and 10 control images. The performance of the deep learning system was calculated and compared with that of two experienced radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis group, SjS group, and control group was 55.0%, 83.0%, and 73.0%, respectively, and its total accuracy was 70.3%. The corresponding sensitivity of the two radiologists was 64.0%, 72.0%, and 86.0%, respectively, and their total accuracy was 74.0%. This study revealed that, when differentiating USG images of the two case groups with SMG inflammation from those of healthy subjects, the deep learning system was more sensitive than experienced radiologists in diagnosing SjS.

1. Introduction

Among the various types of salivary gland lesions, inflammation is the most common and has various underlying pathogeneses [1], including salivary flow obstruction, viral or bacterial infection, and autoimmune diseases such as Sjögren’s syndrome (SjS). Some of these diseases cause characteristic changes in the salivary gland parenchyma and are clearly visualized by ultrasonography (USG) [2,3,4,5,6,7,8,9]. Obstructive sialoadenitis is mainly due to sialoliths, most of which arise in the submandibular gland (SMG) or Wharton’s duct [2]. This condition can be diagnosed by USG, which shows a hyperechoic band with a posterior acoustic shadow when a sialolith is present. However, the sialolith may not be detected in its early stage or when it is located adjacent to the inner surface of the mandible. Moreover, obstructive sialoadenitis occurs even in the absence of sialoliths. The parenchymal changes shown by USG in patients with this pathology vary depending on the disease stage. In the early stage, USG sometimes reveals no abnormality in the parenchyma, whereas other cases are characterized by enlargement and hypoechoic change. As the disease progresses, USG shows a mixed pattern with hyperechoic spots in the hypoechoic area, finally evolving to a homogeneous hyperechoic pattern that frequently exhibits atrophic parenchyma [7,8]. SjS is an autoimmune disease characterized by lymphocyte infiltration into exocrine glands such as the salivary glands and lacrimal glands, causing specific damage to these glands. USG findings are characterized by multiple small anechoic areas that frequently contain small hyperechoic spots within the inhomogeneous salivary parenchyma [7,8,9]. This appearance can be observed in both the SMG and parotid gland. Depending on the disease stage, these SMG appearances may overlap among healthy individuals, patients with obstructive sialoadenitis, and patients with SjS. In addition, the interpretation of images depends on the examiner’s experience level. Therefore, achieving a correct diagnosis is often difficult, especially for inexperienced observers.
Advanced deep learning systems have recently been developed, and their usefulness has been reported in the field of diagnostic imaging [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. A deep learning system is a machine learning method used in artificial intelligence that allows a computer to learn tasks in the same way as humans. It is based on a neural network, which is a system that imitates the neurons in the human brain. We previously reported the diagnostic performance of a deep learning system for the differentiation of patients with and without SjS who complained of xerostomia using submandibular USG images [17]. Although our study revealed high diagnostic ability, inflamed SMGs can be present in individuals both with and without SjS, and this must be taken into account. Therefore, to ensure the effective use of deep learning systems for screening of individuals with a wide spectrum of clinical conditions, the performance of these systems should be investigated using individuals with definitively verified healthy, flow-obstructed, and SjS-affected glands.
The present study was performed to evaluate the diagnostic performance of deep learning systems using USG images that depict the parenchymal changes of SMGs affected by sialolith-induced obstructive sialoadenitis, SjS-affected SMGs, and normal SMGs.

2. Materials and Methods

The study design was approved by the Ethics Committee of Aichi Gakuin University, Japan (approval number 496) and was planned according to the ethical standards of the Helsinki Declaration.

2.1. Subjects

The subjects were retrospectively selected from an image database of patients who visited our institution from June 2010 to October 2019. The study included 50 USG images of 50 patients (26 men, 24 women; average age, 46.3 years) with SMG obstructive sialoadenitis, confirmed by the presence of sialoliths on CT together with inflammatory symptoms such as swelling and tenderness (obstructive sialoadenitis group), and 50 USG images of 28 patients (1 man, 27 women; average age, 66.0 years) with a confirmed diagnosis of SjS according to both the Japanese criteria [28] and the American–European Consensus Group criteria [29] (SjS group) (Figure 1a,b). The diagnosis of SjS was based on a blood test, sialography, and a Saxon test; in some cases, a biopsy was also performed when these tests were inconclusive. Images of the affected side were obtained for patients with sialolithiasis, and images of both sides were obtained for patients with SjS. Among the obtained SjS images, six poor-quality images were excluded. USG scans were also obtained from 50 control subjects (26 men, 24 women; average age, 59.0 years) with no SMG abnormalities who had presented with other diseases, such as cervical lymph node metastasis of oral cancer or submandibular lymphadenitis (Figure 1c); the control subjects were randomly selected from the same imaging database. The 150 images were randomly divided into training and test groups for the deep learning process. The training group comprised 40 images from the obstructive sialoadenitis group, 40 images from the SjS group, and 40 images from the control group. The test group comprised 10 images from the obstructive sialoadenitis group, 10 images from the SjS group, and 10 images from the control group.

2.2. USG Protocol

All USG images of the SMGs were taken in B-mode using a Logiq E9 system (GE Healthcare, Tokyo, Japan) with a center frequency of 12 MHz. The patient was placed in the supine position with his or her neck extended and face directed away from the examination site, and the scan was performed with the sagittal plane parallel to the inferior border of the mandible. The probe was oriented so that the SMG was located in the center of the region of interest and the mylohyoid muscle was located in the anterior-lower part of the SMG (Figure 2). Images of the entire SMG were saved as multiple still images in our hospital imaging database.

2.3. Imaging Data

USG images were downloaded from the hospital imaging database in Digital Imaging and Communications in Medicine (DICOM) format. Because this study focused on parenchymal changes, the images in the obstructive sialoadenitis group were selected from among those without a sialolith in the region of interest. These USG images were selected by one radiologist (C.K.). The images in the SjS group were selected by another radiologist (Y.K.), who then converted all USG images from DICOM format to Portable Network Graphics (PNG) format. A five-fold cross-validation procedure was used to train the deep learning model for image classification [21,25]. The data were randomly split into five groups; one group was used as the test set, and the remaining data were used as training and validation samples, with 25% of the training samples randomly assigned for validation. We ensured that the training and test data did not contain samples from the same image or the same patient while maintaining a balanced number of samples in each group. All datasets were created by one radiologist (Y.K.).
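For illustration, the following Python sketch shows one way such a patient-grouped five-fold split with a 25% validation hold-out could be organized. The use of scikit-learn, the function name, and the variable names are our own assumptions and not part of the authors' pipeline.
```python
# Minimal sketch of a patient-grouped five-fold split with a 25% validation
# hold-out, as described above. scikit-learn usage and all names are
# illustrative assumptions, not the authors' actual code.
from sklearn.model_selection import GroupKFold, train_test_split

def make_folds(image_paths, labels, patient_ids, seed=42):
    """Yield (train, val, test) index arrays for each of the five folds,
    keeping all images of one patient in the same side of the split."""
    gkf = GroupKFold(n_splits=5)
    for train_val_idx, test_idx in gkf.split(image_paths, labels, groups=patient_ids):
        # Hold out 25% of the non-test data as a validation set,
        # stratified so the three groups stay balanced.
        train_idx, val_idx = train_test_split(
            train_val_idx,
            test_size=0.25,
            stratify=[labels[i] for i in train_val_idx],
            random_state=seed,
        )
        yield train_idx, val_idx, test_idx
```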
We performed data augmentation on the training data [10,17,25]. Data augmentation is a technique frequently used when implementing deep learning with a small number of clinical cases; the number of data items is synthetically increased by altering the brightness, contrast, rotation, and sharpness of the images. In total, 2000 augmented images from the obstructive sialoadenitis group, 2000 augmented images from the SjS group, and 2000 augmented images from the control group were included in the analysis.
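As a concrete example of this kind of augmentation, the sketch below perturbs brightness, contrast, sharpness, and rotation with Pillow. The parameter ranges and function names are illustrative assumptions, not the values used in the study.
```python
# Minimal sketch of brightness/contrast/rotation/sharpness augmentation with
# Pillow. Parameter ranges and names are illustrative assumptions only.
import random
from PIL import Image, ImageEnhance

def augment_once(img: Image.Image) -> Image.Image:
    """Return one randomly perturbed copy of a grayscale USG image."""
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5))
    # Small random rotation; black fill outside the rotated frame.
    return img.rotate(random.uniform(-10, 10), fillcolor=0)

def augment_to_n(src_path: str, n: int):
    """Generate n augmented variants of a single training image."""
    base = Image.open(src_path).convert("L")
    return [augment_once(base) for _ in range(n)]
```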

2.4. Diagnostic Performance of the Deep Learning System

The deep learning process was implemented on an NVIDIA GeForce GTX GPU workstation (Nvidia Corp., Santa Clara, CA, USA) with 11 GB of GPU memory and 128 GB of system memory. The deep learning process was performed using the VGG16 architecture, pretrained on the ImageNet dataset for transfer learning. The VGG16 architecture contains 16 layers, consisting of 13 convolutional layers and 3 fully connected layers. We created a model that collectively identified the obstructive sialoadenitis, SjS, and control groups and performed the following procedure for this model. The training and validation processes were conducted for 30 epochs, by which point sufficient learning had been obtained. The automated selection method was repeated five times, resulting in five learning models. Each test data item was then input into each learning model, and whether the image represented a correct or incorrect response was determined together with its probability, which was calculated automatically by the deep learning system for each image. After this process, the results of the five learning models were summarized and the accuracy was calculated. We performed this entire process twice and summed the results to calculate the final accuracy.
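For readers who want to reproduce the general idea, the following sketch builds a three-class VGG16 transfer-learning model in Keras, matching the description above (ImageNet pretraining, 13 convolutional plus 3 fully connected layers, 30 epochs). The framework choice, layer sizes, optimizer, and input resolution are our own assumptions rather than the authors' exact configuration; grayscale USG images would need to be replicated to three channels to use the ImageNet weights.
```python
# Minimal sketch of VGG16 transfer learning for three-class classification.
# Framework, layer sizes, optimizer, and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), n_classes=3):
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # transfer learning: freeze the convolutional base
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),   # fully connected layer 1
        layers.Dense(4096, activation="relu"),   # fully connected layer 2
        layers.Dense(n_classes, activation="softmax"),  # fully connected layer 3
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# One model would be trained per cross-validation fold, e.g.:
# model = build_model()
# model.fit(train_images, train_labels,
#           validation_data=(val_images, val_labels), epochs=30)
```
The softmax output of such a model provides the per-image class probability referred to in the text.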
To allow comparison with the diagnostic performance of the deep learning system, two experienced radiologists (Radiologists A and B), each with more than 15 years of experience, evaluated the same images (50 obstructive sialoadenitis, 50 SjS, and 50 control images) after calibration with 15 images (5 obstructive sialoadenitis, 5 SjS, and 5 control images) randomly selected from the training datasets. For assessment of intraobserver agreement, the radiologists performed two evaluations at an interval of at least 1 month.

2.5. Statistical Analysis

Intraobserver and interobserver agreement was assessed with κ values using SPSS statistical software version 27 (IBM Corp., Armonk, NY, USA). A κ value of < 0.20 indicated poor agreement, 0.21 to 0.40 indicated fair agreement, 0.41 to 0.60 indicated moderate agreement, 0.61 to 0.80 indicated good agreement, and 0.81 to 1.00 indicated excellent agreement.
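As a sketch of how such agreement statistics can be computed outside SPSS, the snippet below uses scikit-learn's Cohen's κ together with the interpretation bands listed above. The label encoding and variable names are illustrative assumptions.
```python
# Minimal sketch of the kappa computation and interpretation described above.
# Label encoding (0 = obstructive sialoadenitis, 1 = SjS, 2 = control) and
# variable names are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

def interpret_kappa(k: float) -> str:
    if k < 0.21:
        return "poor"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "good"
    return "excellent"

# Intraobserver agreement: one radiologist's first vs. second reading.
# kappa = cohen_kappa_score(reading_first, reading_second)
# print(round(kappa, 2), interpret_kappa(kappa))
```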

3. Results

Table 1 summarizes the results obtained by the deep learning system and those obtained by the radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis group, SjS group, and control group was 55.0%, 83.0%, and 73.0%, respectively, and the total accuracy was 70.3%. The sensitivity of the two radiologists in the obstructive sialoadenitis group, SjS group, and control group was 64.0%, 72.0%, and 86.0%, respectively, and the total accuracy was 74.0%.
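These summary figures follow directly from the confusion matrix in Table 1, in which rows correspond to the assigned diagnosis and columns to the actual group (this orientation is consistent with the reported PPVs). The short sketch below recomputes the deep learning system's sensitivities, PPVs, and total accuracy; the use of NumPy is our own illustrative choice.
```python
# Worked sketch recomputing Table 1 metrics for the deep learning system.
# Rows: diagnosed as obstructive sialoadenitis, SjS, control.
# Columns: actual obstructive sialoadenitis, SjS, control.
import numpy as np

cm = np.array([[55, 12, 20],
               [29, 83,  7],
               [16,  5, 73]])

sensitivity = np.diag(cm) / cm.sum(axis=0)   # per actual class (column-wise)
ppv = np.diag(cm) / cm.sum(axis=1)           # per assigned diagnosis (row-wise)
accuracy = np.trace(cm) / cm.sum()

print(sensitivity)  # [0.55, 0.83, 0.73]
print(ppv)          # approx. [0.632, 0.697, 0.777]
print(accuracy)     # approx. 0.703
```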
The intraobserver agreement rates were good for both radiologists (κ = 0.64, 0.69) (Table 2). The interobserver agreement rates were moderate (first: κ = 0.60, second: κ = 0.53). The intermodel agreement rate was good (κ = 0.72).

4. Discussion

Salivary gland inflammation has a variety of causes and is the most common of the major salivary gland lesions. USG is important for the diagnosis of salivary gland lesions, and its usefulness has been proven [3,4,5,6,7,8,9]. However, diagnosis from USG images is difficult, and its accuracy depends on the examiner’s experience. Many studies of imaging diagnosis using deep learning systems have been reported in recent years, demonstrating that such systems are effective in supporting image diagnostics. Deep learning analysis of USG images has also been performed in various fields and is reportedly as accurate as radiologists’ assessments; however, only a few such reports have focused on the oral and maxillofacial region.
In this study, the accuracy of the deep learning system and that of the radiologists were almost identical, but neither was very high (approximately 70%). However, the sensitivity in each group was distinctive: the deep learning system showed higher sensitivity in the SjS group, the radiologists showed higher sensitivity in the control group, and both showed low sensitivity in the obstructive sialoadenitis group. We previously evaluated the diagnostic performance of a deep learning system for the detection of SjS in USG images and found that its accuracy, sensitivity, and specificity for the SMGs were 84.0%, 81.0%, and 87.0%, respectively [17]. The sensitivity for SjS in the present study was lower than that in the previous study, presumably because the present study involved three groups, including the obstructive sialoadenitis group. USG images of the SMGs in patients with obstructive sialoadenitis and SjS may be similar depending on the stage of the disease, and the two conditions should therefore be carefully distinguished from each other.
In the control group, the sensitivity of the radiologists tended to be higher than that of the deep learning system. Most radiologists start their training by first understanding normal images. Imaging diagnosis is then based on the normal image to determine whether an abnormality is present and to finally make a diagnosis. Therefore, an experienced radiologist is considered to be a specialist in normal images.
The sensitivity in the obstructive sialoadenitis group was low for both the deep learning system and the radiologists and was the main reason for the low total accuracy. The USG findings of chronic sialoadenitis vary with the stage of the disease: some cases show enlarged glands and dilated ducts, while others show multiple hypoechoic areas without gland enlargement [3,8]. These findings suggest that obstructive sialoadenitis should always be included in the differential diagnosis because its appearance ranges from normal to clearly abnormal depending on its stage.
Regarding the difference in sensitivity between the deep learning system and the radiologists, the deep learning system learns from the images input to the neural network and then reaches a diagnosis. The neural network is a black box, and its internal workings are therefore unknown; however, it may use information different from that used by humans to make a diagnosis. The deep learning system can be used as a second set of eyes for identification of such conditions, and in the future it may become more accurate with larger datasets and better algorithms.
In the present study, the intraobserver agreement rates for both radiologists were good, while the interobserver agreement rate was moderate. Although the radiologists were calibrated before the evaluation, they likely applied their own criteria when judging similar images; this gap between observers presumably worsened the interobserver agreement. The agreement rate for the deep learning system was good and was the highest value obtained. Observer agreement is important when screening a wide range of subjects, and the results of this study suggest that automatic diagnosis using a deep learning system may be useful. However, further improvement of the agreement rate is necessary, and this might be achieved by increasing the number of cases in future studies.
The present study had several limitations. First, we only used B-mode images and did not use sonoelastography, Doppler, or dynamic images. Because sonoelastography is potentially useful for the diagnosis of chronic inflammatory conditions of the major salivary glands, as reported by others [4,5], its performance with a deep learning system should be verified in future studies, together with Doppler-mode sonography and dynamic images. Second, we did not include cysts or tumors in this evaluation. Cysts and tumors often develop in the SMG, and USG is one of the most effective methods for their examination. However, these lesions are relatively easy to detect because they appear as localized lesions rather than diffuse changes in the gland parenchyma. Future studies will need to distinguish between cysts and tumors and between benign and malignant tumors. Finally, the number of patients was small. We compensated for this disadvantage by using data augmentation techniques to amplify the training data; however, abundant original data are important for improving the accuracy of the deep learning system. Given that the number of cases available at a single facility is limited, collaboration among multiple facilities will be necessary.

5. Conclusions

This study revealed that the deep learning system was more sensitive than experienced radiologists in diagnosing SjS on USG images when differentiating between two case groups with SMG inflammation and a group of healthy subjects. Further studies of deep learning systems with ultrasonography would be beneficial for improving the accuracy of identifying salivary gland inflammation.

Author Contributions

Conceptualization, Y.K. and E.A.; methodology, Y.K., Y.A. and E.A.; software, Y.K.; validation, Y.K., C.K., Y.A., M.N. and E.A.; formal analysis, Y.K.; investigation, Y.K. and C.K.; resources, Y.K.; data curation, Y.K. and C.K.; writing—original draft preparation, Y.K.; writing—review and editing, Y.K. and E.A.; visualization, Y.K.; supervision, Y.K. and E.A.; project administration, Y.K.; funding acquisition, Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI Grant Number 18K17184 (Grant-in-Aid for Young Scientists).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Aichi Gakuin University, Japan (approval number 496).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Names and exact data of the participants of the study may not be available owing to patient confidentiality and privacy policy.

Acknowledgments

We thank Angela Morben, DVM, ELS, from Edanz for editing a draft of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krishnamurthy, S.; Vasudeva, S.B.; Vijayasarathy, S. Salivary gland disorders: A comprehensive review. World J. Stomatol. 2015, 4, 56–71.
  2. Rzymska-Grala, I.; Stopa, Z.; Grala, B.; Gołębiowski, M.; Wanyura, H.; Zuchowska, A.; Sawicka, M.; Zmorzyński, M. Salivary gland calculi—Contemporary methods of imaging. Pol. J. Radiol. 2010, 75, 25–37.
  3. Gandage, S.G.; Kachewar, S.G. An Imaging Panorama of Salivary Gland Lesions as seen on High Resolution Ultrasound. J. Clin. Diagn. Res. 2014, 8, RC01–RC13.
  4. Elbeblawy, Y.M.; Eshaq Amer Mohamed, M. Strain and shear wave ultrasound elastography in evaluation of chronic inflammatory disorders of major salivary glands. Dentomaxillofac. Radiol. 2020, 49, 20190225.
  5. Szyfter, W.; Wierzbicka, M.; Kałużny, J.; Ruchała, M.; Stajgis, M.; Kopeć, T. Sonoelastography—A Useful Adjunct for Parotid Gland Ultrasound Assessment in Patients Suffering from Chronic Inflammation. Med. Sci. Monit. 2014, 20, 2311–2317.
  6. Badea, A.F.; Lupsor Platon, M.; Crisan, M.; Cattani, C.; Badea, I.; Pierro, G.; Sannino, G.; Baciut, G. Fractal analysis of elastographic images for automatic detection of diffuse diseases of salivary glands: Preliminary results. Comput. Math. Methods Med. 2013, 2013, 347238.
  7. Zajkowski, P.; Ochal-Choińska, A. Standards for the assessment of salivary glands—An update. J. Ultrason. 2016, 16, 175–190.
  8. Bialek, E.J.; Jakubowski, W.; Zajkowski, P.; Szopinski, K.T.; Osmolski, A. US of the major salivary glands: Anatomy and spatial relationships, pathologic conditions, and pitfalls. Radiographics 2006, 26, 745–763.
  9. Hocevar, A.; Ambrozic, A.; Rozman, B.; Kveder, T.; Tomsic, M. Ultrasonographic changes of major salivary glands in primary Sjögren’s syndrome. Diagnostic value of a novel scoring system. Rheumatology 2005, 44, 768–772.
  10. Ariji, Y.; Fukuda, M.; Kise, Y.; Nozawa, M.; Yanashita, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Contrast-enhanced computed tomography image assessment of cervical lymph node metastasis in patients with oral cancer by using a deep learning system of artificial intelligence. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 127, 458–463.
  11. Ariji, Y.; Sugita, Y.; Nagao, T.; Nakayama, A.; Fukuda, M.; Kise, Y.; Nozawa, M.; Nishiyama, M.; Katumata, A.; Ariji, E. CT evaluation of extranodal extension of cervical lymph node metastases in patients with oral squamous cell carcinoma using deep learning classification. Oral Radiol. 2019, 36, 148–155.
  12. Kise, Y.; Ikeda, H.; Fujii, T.; Fukuda, M.; Ariji, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Preliminary study on the application of deep learning system to diagnosis of Sjögren’s syndrome on CT images. Dentomaxillofac. Radiol. 2019, 48, 20190019.
  13. Choi, K.J.; Jang, J.K.; Lee, S.S.; Sung, Y.S.; Shim, W.H.; Kim, H.S.; Yun, J.; Choi, J.-Y.; Lee, Y.; Kang, B.-K.; et al. Development and Validation of a Deep Learning System for Staging Liver Fibrosis by Using Contrast Agent–enhanced CT Images in the Liver. Radiology 2018, 289, 688–697.
  14. Walsh, S.L.F.; Calandriello, L.; Silva, M.; Sverzellati, N. Deep learning for classifying fibrotic lung disease on high-resolution computed tomography: A case-cohort study. Lancet Respir. Med. 2018, 6, 837–845.
  15. Song, Q.; Zhao, L.; Luo, X.; Dou, X. Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images. J. Healthc. Eng. 2017, 2017, 8314740.
  16. Gao, X.W.; Hui, R.; Tian, Z. Classification of CT brain images based on deep learning networks. Comput. Methods Programs Biomed. 2016, 138, 49–56.
  17. Kise, Y.; Shimizu, M.; Ikeda, H.; Fujii, T.; Kuwada, C.; Nishiyama, M.; Funakoshi, T.; Ariji, Y.; Fujita, H.; Katsumata, A.; et al. Usefulness of a deep learning system for diagnosing Sjögren’s syndrome using ultrasonography images. Dentomaxillofac. Radiol. 2020, 49, 20190348.
  18. Becker, A.S.; Mueller, M.; Stoffel, E.; Marcon, M.; Ghafoor, S.; Boss, A. Classification of breast cancer from ultrasound imaging using a generic deep learning analysis software: A pilot study. Br. J. Radiol. 2017, 91, 20170576.
  19. Choi, J.S.; Han, B.-K.; Ko, E.Y.; Bae, J.M.; Ko, E.Y.; Song, S.H.; Kwon, M.-R.; Shin, J.H.; Hahn, S.Y. Effect of a Deep Learning Framework-Based Computer-Aided Diagnosis System on the Diagnostic Performance of Radiologists in Differentiating between Malignant and Benign Masses on Breast Ultrasonography. Korean J. Radiol. 2019, 20, 749–758.
  20. Fujioka, T.; Kubota, K.; Mori, M.; Kikuchi, Y.; Katsuta, L.; Kasahara, M.; Oda, G.; Ishiba, T.; Nakagawa, T.; Tateishi, U. Distinction between benign and malignant breast masses at breast ultrasound using deep learning method with convolutional neural network. Jpn. J. Radiol. 2019, 37, 466–472.
  21. Stoffel, E.; Becker, A.S.; Wurnig, M.C.; Marcon, M.; Ghafoor, S.; Berger, N.; Boss, A. Distinction between phyllodes tumor and fibroadenoma in breast ultrasound using deep learning image analysis. Eur. J. Radiol. Open 2018, 5, 165–170.
  22. Zheng, X.; Yao, Z.; Huang, Y.; Yu, Y.; Wang, Y.; Liu, Y.; Mao, R.; Li, F.; Xiao, Y.; Wang, Y.; et al. Deep learning radiomics can predict axillary lymph node status in early-stage breast cancer. Nat. Commun. 2020, 11, 1236.
  23. Nguyen, D.T.; Kang, J.K.; Pham, T.D.; Batchuluun, G.; Park, K.R. Ultrasound Image-Based Diagnosis of Malignant Thyroid Nodule Using Artificial Intelligence. Sensors 2020, 20, 1822.
  24. Sun, Q.; Lin, X.; Zhao, Y.; Li, L.; Yan, K.; Liang, D.; Sun, D.; Li, Z.-C. Deep Learning vs. Radiomics for Predicting Axillary Lymph Node Metastasis of Breast Cancer Using Ultrasound Images: Don’t Forget the Peritumoral Region. Front. Oncol. 2020, 10, 53.
  25. Hiraiwa, T.; Ariji, Y.; Fukuda, M.; Kise, Y.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac. Radiol. 2019, 48, 20180218.
  26. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019, 35, 301–307.
  27. Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Muramatsu, C.; Fukuda, M.; Kise, Y.; Nozawa, M.; Kuwada, C.; Fujita, H.; Katsumata, A.; et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 128, 424–430.
  28. Fujibayashi, T.; Sugai, S.; Miyasaka, N.; Hayashi, Y.; Tsubota, K. Revised Japanese criteria for Sjögren’s syndrome (1999): Availability and validity. Mod. Rheumatol. 2004, 14, 425–434.
  29. Vitali, C.; Bombardieri, S.; Jonsson, R.; Moutsopoulos, H.M.; Alexander, E.L.; Carsons, S.E.; Daniels, T.E.; Fox, P.C.; Fox, R.I.; Kassan, S.S.; et al. Classification criteria for Sjogren’s syndrome: A revised version of the European criteria proposed by the American-European Consensus Group. Ann. Rheum. Dis. 2002, 61, 554–558.
Figure 1. Examples of ultrasound images. (a) A patient in the obstructive sialoadenitis group shows inhomogeneous parenchyma and well-defined margins. (b) A patient in the Sjögren’s syndrome group shows inhomogeneous parenchyma characterized by multiple diffuse anechoic regions and ill-defined margins. (c) A patient in the control group shows homogeneous parenchyma and well-defined margins.
Figure 2. Illustration of right submandibular gland imaging. The submandibular gland was located in the center of the region of interest, and the mylohyoid muscle was located in the anterior-lower part of the submandibular gland (white arrow). MM, mylohyoid muscle; SMG, submandibular gland.
Table 1. Results obtained by the deep learning system and the radiologists.

                                           Obstructive sialoadenitis   SjS    Control   PPV (%)
DL             Obstructive sialoadenitis   55                          12     20        63.2
               SjS                         29                          83     7         69.7
               Control                     16                          5      73        77.7
               Sensitivity (%)             55.0                        83.0   73.0      70.3 (accuracy)
Radiologists   Obstructive sialoadenitis   64                          22     14        64.0
               SjS                         26                          72     1         72.7
               Control                     10                          6      86        84.3
               Sensitivity (%)             64.0                        72.0   86.0      74.0 (accuracy)

DL, deep learning system; SjS, Sjögren’s syndrome; PPV, positive predictive value.
Table 2. Interobserver and intraobserver/model agreement.

Intraobserver agreement
  Radiologist A                    0.64
  Radiologist B                    0.69
  Mean                             0.66   Good
Interobserver/model agreement
  Radiologist A vs. B (first)      0.60
  Radiologist A vs. B (second)     0.53
  Mean                             0.57   Moderate
  DL (first) vs. DL (second)       0.72   Good

DL, deep learning system.

