Article

A Three-Stage Fusion Neural Network for Predicting the Risk of Root Fracture—A Pilot Study

1 Department of Electronic Engineering, National Formosa University, Yunlin 632301, Taiwan
2 Smart Machinery and Intelligent Manufacturing Research Center, National Formosa University, Yunlin 632301, Taiwan
3 Department of Multimedia Design, National Formosa University, Yunlin 632301, Taiwan
4 Department of Stomatology, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
5 Department of Electrical Engineering, National Formosa University, Yunlin 632301, Taiwan
* Authors to whom correspondence should be addressed.
Bioengineering 2025, 12(5), 447; https://doi.org/10.3390/bioengineering12050447
Submission received: 28 January 2025 / Revised: 11 April 2025 / Accepted: 11 April 2025 / Published: 24 April 2025

Abstract

Predicting the risk of root fracture following root canal therapy requires a diagnosis based on the patient's dental history and status. However, dental history is a kind of categorical data, which is not easy to combine with numerical data while obtaining good performance in deep learning. Under these conditions, the accuracy of a support vector machine (SVM) and of artificial neural networks (ANNs) is 71.7% and 73.1%, respectively. In this study, a three-stage fusion neural network (TSFNN) based on ANNs is proposed to better handle the multiple types of clinical data encountered in dentistry. Clinical data were obtained from 145 teeth, comprising 97 fractured and 48 nonfractured teeth. Each record contained 17 items, divided into 10 categorical and 7 numerical items. TSFNN combines a numerical and a categorical neural network using batch normalization and embedding layer techniques, achieving an accuracy of 82.1% and a 19.1% improvement in F1-score. It shows impressive performance in predicting the risk of root fracture. Furthermore, given the limited amount of clinical data, it is believed that such a pilot study demonstrates how results can be improved effectively when clinical data are scarce.

1. Introduction

According to the literature, root fracture following root canal therapy has an incidence of about 30% [1]. Most root fractures are complete or incomplete longitudinal fractures along the long axis of the tooth, termed vertical root fractures (VRFs) [2]. Concerning the relationship between root canal therapy and VRF, Sedgley reported that the shear strength, toughness and compressive strength of teeth decrease by about 3.5% after root canal therapy [3]. Reeh [4] further reported a 5% loss of tooth stiffness after endodontic procedures. Barreto [5] found that apical pressure filling techniques are a factor causing VRF: VRF occurred in 13.3% of cases with lateral compaction and 33.3% of cases with Tagger's hybrid technique. Wilcox [6] indicated that root canal enlargement may also induce VRF. Saw [7] noted that the root strains produced by different obturation techniques could cause VRF. Many researchers have thus proposed a variety of possible causes of VRF after root canal therapy. However, it was hard to predict the risk of VRF by integrating these different causal factors until artificial intelligence (AI) became significantly more advanced. Sanjeev B. Khanagar [8] reviewed 43 research papers from the dental field up to 2021, showing that the use of AI in dental research increased from one paper in 2008 to 20 in 2019, covering the detection and diagnosis of dental caries, proximal dental caries, VRF, apical lesions and so on [9,10,11,12,13,14,15,16,17,18,19,20].
Most AI medical applications employ deep learning (DL) [21,22,23], which is in most cases achieved with neural networks (NNs). These include artificial neural networks (ANNs) [10,11,12], convolutional neural networks [13,14,15,16,17,18], Bayesian neural networks (BNNs) [19] and probabilistic neural networks (PNNs) [20]. The ANN is the core architecture in this study.
Compared to previous works, it was not until 2023 that Chang [12] integrated all practical factors to predict the risk of VRF using a deep learning neural network (DLNN). This study uses the same variables to predict VRF with different approaches. The data contain only 145 teeth with 17 variables, collected over a period of six years in Taiwan, which has a population of approximately 23 million; a limited clinical data size is a common situation faced by medical research in Taiwan. In such a situation, a support vector machine (SVM) from machine learning (ML) performs similarly to original ANNs. The original ANNs could be used as shown in Figure 1: the clinical data are classified into numerical and categorical data, normalized separately, and then input into the ANNs. In this way, the accuracy of the ANNs is 73.1%, slightly better than the 71.7% of the SVM.
In this paper, a three-stage fusion neural network is proposed to improve on the original ANNs, raising the accuracy from 73.1% to 82.1%. The proposal modifies not only the approach but also the architecture.

2. Materials and Methods

2.1. Data Collection

The clinical data used in this study were collected from January 2015 to June 2021 at the Department of Stomatology, Ditmanson Medical Foundation Chia-Yi Christian Hospital. In total, qualifying clinical data were obtained from 145 teeth, comprising 97 fractured and 48 nonfractured teeth.

2.2. Datasets

Each record contained 17 items, as shown in Table 1, describing the dental history and status of the patient. The 17 items were divided into 10 categorical and 7 numerical items.

2.2.1. Categorical Data

As shown in Table 1, items no. 1 to 6 are binary questions, while items no. 7 to 10 offer multiple options. These options carry no measurable relationship between them. For example, the sex item includes male and female, and there is no measurable relationship between the two as far as teeth are concerned. Likewise, item no. 4 records whether there was preoperative pain, but it cannot describe how much pain was experienced. As another example, item no. 7 has six possible tooth positions with no specific ordinal relationship between them. Items no. 1 to 10 are therefore defined as categorical data.

2.2.2. Numerical Data

In contrast, quantities that can be measured are considered numerical data. For example, the age at the time of treatment (item no. 11) is continuous, measured data, and the remaining root canal wall thickness (item no. 16) is numerical data. In the same way, items no. 11 to 17 are considered numerical data.
In short, items whose options are numbers without a comparable, continuous relationship between them are considered categorical data; otherwise, they are numerical data.

2.3. Stage 1 Numerical Neural Network (NNN)

2.3.1. Architecture of Numerical Neural Network

In the first stage, the numerical data are passed through min-max normalization and then into a neural network, as shown in Figure 2. In our experiments, items no. 11 to 17 were fed in parallel into the min-max normalization model to transform the data into a uniform range from 0 to 1.

2.3.2. Min-Max Normalization for Numerical Items

Min-max normalization maps data into a uniform range, usually 0 to 1 or −1 to 1; the range 0 to 1 was used in our experiments. For each item (no. 11 to 17), the minimal and maximal values of that item are used to map each value into 0 to 1, as shown in Equation (1):
y = (x − min(x))/(max(x) − min(x)),
where x is the original item value, and max(x) and min(x) are the maximal and minimal values of that item. The normalized value y serves as the input of the DLNN for the 7 numerical items in our experiments.
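As a concrete illustration, the sketch below applies Equation (1) column-wise with NumPy. The sample values are hypothetical, not taken from the clinical dataset, and the zero-span guard is an implementation detail added here, not part of the paper.

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Map each column (item) of x into [0, 1] per Equation (1)."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    # A constant column would give max - min = 0; guard against division by zero.
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (x - x_min) / span

# Hypothetical rows of the 7 numerical items (no. 11 to 17).
numerical = np.array([
    [54.0, 3.0, 120.0, 2.0, 1.0, 1.8, 2.1],
    [32.0, 4.0,  60.0, 1.0, 2.0, 2.3, 2.7],
    [67.0, 2.0, 200.0, 3.0, 3.0, 1.2, 1.6],
])
print(min_max_normalize(numerical))  # every value now lies in [0, 1]
```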

2.4. Stage 2 Categorical Neural Network (CNN)

2.4.1. Ordinal Encoding for Categorical Items

As shown in Table 2, the 10 categorical items (items no. 1 to 10) are encoded to the values 1 to 30 by ordinal encoding. This gives the categorical options an ordinal relationship between them, so that the categorical data become continuous-like data, similar to numerical data.
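For illustration, the sketch below reproduces the running codes 1 to 30 of Table 2 in Python. The option lists are copied from Table 1; the helper names are ours, not from the paper.

```python
# Option lists for the 10 categorical items (Table 1).
ITEM_OPTIONS = [
    ["Male", "Female"],                      # 1 sex
    ["Yes", "No"],                           # 2 previous dental fractures
    ["Yes", "No"],                           # 3 previous prostheses
    ["Yes", "No"],                           # 4 preoperative pain
    ["Yes", "No"],                           # 5 percussion pain
    ["Yes", "No"],                           # 6 endodontic retreatment
    ["Maxillary anterior teeth", "Maxillary molars", "Maxillary premolars",
     "Mandibular anterior teeth", "Mandibular molars", "Mandibular premolars"],
    ["None", "Para post", "Casting post", "Fiber post", "Screw post"],
    ["None", "Fixed partial dental prostheses",
     "Abutment of removable dentures", "Both"],
    ["None", "Previous apicoectomy", "Root amputation"],
]

# Build option -> code tables with a running offset, yielding codes 1..30.
code_tables, offset = [], 1
for options in ITEM_OPTIONS:
    code_tables.append({opt: offset + i for i, opt in enumerate(options)})
    offset += len(options)

def encode(record):
    """Encode one patient's 10 categorical answers into 10 ordinal codes."""
    return [code_tables[i][value] for i, value in enumerate(record)]

print(encode(["Female", "No", "Yes", "No", "Yes", "No",
              "Mandibular molars", "Fiber post", "None", "None"]))
# -> [2, 4, 5, 8, 9, 12, 17, 22, 24, 28]
```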

2.4.2. Embedding

Categorical data can be viewed as a kind of natural language. Embeddings have been used successfully in natural language processing (NLP), and the same benefit carries over to categorical data: an embedding encodes categorical data into a low-dimensional vector representation that is easily integrated into a DLNN.
As shown in Figure 3a, a neural network with one hidden layer is pre-trained on the samples to obtain the transform parameters. The output nodes are then removed, as in Figure 3b, leaving an output vector with 20 nodes. This network serves as an embedding layer that transforms the 10 categorical values into a 20-dimensional vector. In pre-training, 20 was the minimum number of hidden nodes able to achieve good identification in this case.
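The sketch below illustrates this pre-training idea in PyTorch. The layer sizes follow the text (10 ordinal inputs, 20 hidden nodes), but the activation, optimizer, learning rate and the single hypothetical training sample are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

# (a) Pre-train: 10 ordinal codes -> 20 hidden nodes -> 2 output classes.
pretrain_net = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 2),
)

codes = torch.tensor([[2., 4., 5., 8., 9., 12., 17., 22., 24., 28.]])
labels = torch.tensor([1])  # 1 = fractured (hypothetical sample)

optimizer = torch.optim.Adam(pretrain_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(pretrain_net(codes), labels)
    loss.backward()
    optimizer.step()

# (b) Remove the output layer; the remaining network maps the 10 ordinal
# codes to a 20-dimensional vector and serves as the embedding layer.
embedding_layer = nn.Sequential(*list(pretrain_net.children())[:-1])
print(embedding_layer(codes).shape)  # torch.Size([1, 20])
```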

2.4.3. Categorical Neural Network

The categorical neural network is structured as in Figure 4. The categorical data pass through ordinal encoding and then into the neural network, with the embedding layer forming the input part of the network: it encodes the 10 ordinal codes into a 20-dimensional vector.

2.5. Three-Stage Fusion Neural Network

The three-stage fusion neural network (TSFNN) is built by combining the NNN and the CNN with batch normalization. The architecture of the TSFNN is shown in Figure 5; the outputs of the NNN and the CNN are the inputs of the fusion network.
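A minimal PyTorch sketch of this three-stage arrangement follows. Only the input sizes (7 numerical items, 10 ordinal codes, a 20-dimensional embedding output) and the placement of batch normalization at the fusion stage come from the text; the branch widths, activations and the stand-in embedding are assumptions.

```python
import torch
import torch.nn as nn

class TSFNN(nn.Module):
    def __init__(self, embedding_layer: nn.Module):
        super().__init__()
        # Stage 1: numerical branch (NNN) on the 7 min-max normalized items.
        self.nnn = nn.Sequential(nn.Linear(7, 16), nn.ReLU())
        # Stage 2: categorical branch (CNN): pre-trained embedding, then NN.
        self.cnn = nn.Sequential(embedding_layer, nn.Linear(20, 16), nn.ReLU())
        # Stage 3: fusion network with batch normalization on the
        # concatenated branch outputs.
        self.fusion = nn.Sequential(
            nn.BatchNorm1d(32),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 2),  # fractured vs. nonfractured
        )

    def forward(self, numerical, categorical_codes):
        fused = torch.cat(
            [self.nnn(numerical), self.cnn(categorical_codes)], dim=1)
        return self.fusion(fused)

# Stand-in for the pre-trained embedding layer of Section 2.4.2.
embedding = nn.Sequential(nn.Linear(10, 20), nn.ReLU())
model = TSFNN(embedding)
logits = model(torch.randn(4, 7), torch.randn(4, 10))
print(logits.shape)  # torch.Size([4, 2])
```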

2.6. Batch Normalization

Batch normalization normalizes the activations of a layer within each mini-batch during the training of the TSFNN. The mean and variance of the activations are calculated for each node within the mini-batch, and the activations are normalized by subtracting the mini-batch mean and dividing by the mini-batch standard deviation. Learned parameters then scale and shift the result toward an optimal activation distribution. Applying batch normalization during training improves the performance of the TSFNN.
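For reference, the standard batch normalization transform for each node over a mini-batch B, written in the style of this paper's other equations, is
x̂ = (x − μ_B)/√(σ_B² + ε),
y = γ · x̂ + β,
where μ_B and σ_B² are the mini-batch mean and variance, ε is a small constant for numerical stability, and γ and β are the learned scale and shift parameters.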

3. Results

This section demonstrates the performance of the components proposed in our research: ordinal encoding with an embedding layer, the data fusion neural network and batch normalization.

3.1. Validation Methods

In our experiments, leave-one-out cross validation was used to verify the performance of the techniques: one sample is held out for testing and the remaining samples are used for training. In this way, 144 samples are used for training and one sample for testing each time, and the procedure is run 145 times in total. The results of the following evaluation methods are the averages over the 145 runs.
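A minimal scikit-learn sketch of this procedure is shown below. The feature matrix, labels and classifier are placeholders (the clinical data are not public, and the paper's model is the TSFNN, not an MLPClassifier); only the leave-one-out protocol itself follows the text.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier

# Placeholder data: 145 samples with 17 preprocessed features each.
rng = np.random.default_rng(0)
X = rng.random((145, 17))
y = rng.integers(0, 2, size=145)

predictions = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                        random_state=0)
    clf.fit(X[train_idx], y[train_idx])       # train on 144 samples
    predictions[test_idx] = clf.predict(X[test_idx])  # test on the held-out one

accuracy = (predictions == y).mean()  # averaged over all 145 runs
print(accuracy)
```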

3.2. Evaluation Methods

Four indices are used to evaluate how well a TSFNN works. Let fractured teeth be the positive cases and nonfractured teeth the negative cases. The true positive count (TP) is the number of positive cases detected as positive; the false negative count (FN) is the number of positive cases detected as negative; the false positive count (FP) is the number of negative cases detected as positive; and the true negative count (TN) is the number of negative cases detected as negative. Accuracy is defined as Equation (2):
Accuracy = (TP + TN)/(TP + FP + TN + FN),
Precision is defined as Equation (3):
Precision = TP/(TP + FP),
Recall is defined as Equation (4):
Recall = TP/(TP + FN),
F1-score is a harmonic mean of precision and recall defined as Equation (5):
F1-score = 2/((1/Precision) + (1/Recall)),
Accuracy shows the detection power for both positive and negative cases; precision shows the detection accuracy among predicted positives; recall shows the detection power on the positive cases; the F1-score is an objective measure of the overall performance.
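For concreteness, the sketch below evaluates Equations (2) to (5) from the four counts. The example counts are hypothetical, chosen only to be plausible for 145 leave-one-out predictions; they are not the paper's confusion matrix.

```python
def evaluate(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute Equations (2)-(5) from the confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 / ((1 / precision) + (1 / recall))  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for 145 leave-one-out predictions (97 positive cases).
print(evaluate(tp=91, fp=20, tn=28, fn=6))
```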

3.3. Performance of Batch Normalization

Table 3 compares the performance of the TSFNN with and without batch normalization. It is impressive that recall increases from 0.823 to 0.937, an improvement of about 0.114, while accuracy achieves about half the improvement of recall. This means that batch normalization improves both the positive and negative detection power.

3.4. Performance of Embedding Layer

As shown in Table 4, when the embedding layer is added to the CNN, accuracy and recall improve significantly, from 0.703 to 0.821 and from 0.813 to 0.937, respectively, while precision gains only a small improvement. This means that false negatives are reduced significantly, so the embedding layer markedly improves the detection of positive cases; the accuracy gain beyond the recall gain indicates that negative cases benefit as well. It is noted that the embedding layer is applied only to the categorical data, which shows that it is very helpful in processing categorical data.

3.5. Performance of Fusion Neural Networks

Figure 6 shows the two architectures used for the performance comparison. Figure 6a shows a two-way neural network that combines numerical and categorical data without a fusion network, while Figure 6b shows the new architecture proposed in this paper. Both networks use a similar number of hidden nodes.
Table 5 shows that the TSFNN improves the detection power of the two-way NN by about 5%.

4. Discussion

This paper builds upon the previous work of Chang [12]; the same variables and the same clinical data are used. However, our study proposes a new three-stage data fusion approach and architecture to improve the performance of ANNs. This new approach is useful not only for ANNs but also for other NNs. The aim is to understand how to fuse different types of data and neural networks to improve performance. In further research, this architecture will be applied to other medical data, including medical images and signals. Furthermore, leave-one-out cross validation is a good validation method for comparing several techniques on the same sample size and variables, so the experimental results show the relative merits of the techniques. As the amount of data increases, it is believed that the results will improve correspondingly. Additionally, an SVM is very sensitive to the training data and to the mixing of different data types; when numerical and categorical data are mixed, an SVM cannot easily achieve high performance. However, as shown in Table 6, SVM and ANNs offer different benefits: the accuracy and precision of the ANNs are better than those of the SVM, whereas the recall and F1-score of the ANNs are worse. In summary, the original ANNs have a performance similar to the SVM, and further improvement of the ANNs is necessary to highlight the benefits of neural networks over ML on big data. The critical contribution of this study is to show how the performance improved steadily and gradually from ANNs, to two-way ANNs, and then to the TSFNN.

5. Conclusions

Complicated clinical data for diagnosis are very common in medical applications. Medical doctors need to make decisions from a variety of clinical data, which may include numerical data and categorical data; even medical image or signal data can be converted into numerical or categorical data. Fusing such data into a suitable architecture for analysis is a key technique. Feeding in all the data without any preprocessing or classification sometimes does not work, in the spirit of "garbage in, garbage out". In such a situation, the accuracy of a support vector machine (SVM) and of artificial neural networks (ANNs) only reaches 71.7% and 73.1%, respectively.
In this paper, a three-stage fusion neural network based on ANNs is proposed to better handle the multiple types of clinical data in the dental field. The target of our research is to predict the risk of root fracture following root canal therapy. Clinical data were obtained from 145 teeth, comprising 97 fractured and 48 nonfractured teeth. Each record contained 17 items describing the dental history and status of the patient, divided into 10 categorical and 7 numerical items.
Batch normalization improved the F1-score of the NN by about 5.5%. Ordinal encoding of the categorical data with an embedding layer improved the F1-score by 9%. A fusion NN that fuses the numerical and categorical NNs improved the F1-score by 4.6%. The combination of all approaches brought an overall improvement of 19.1% in F1-score. This shows that the three-stage fusion neural network is very helpful for handling complicated data in many medical fields. In all experiments of this study, leave-one-out cross validation was used to verify the performance of the techniques; it is a fair and trustworthy way to compare different models given the limited amount of data. Additionally, the proposed approach offers a solution that improves the results even when clinical data are insufficient.

Author Contributions

Conceptualization, C.-S.L., Y.-M.K., H.-Y.H., C.-H.Y. and W.-T.C.; methodology, Y.-M.K.; software, T.-Y.S.; validation, T.-Y.S. and C.-H.Y.; formal analysis, T.-Y.S. and W.-T.C.; investigation, W.-T.C.; resources, H.-Y.H.; data curation, W.-T.C. and T.-Y.S.; writing—original draft preparation, C.-S.L., L.-Y.K. and T.-Y.S.; writing—review and editing, C.-S.L., Y.-M.K. and L.-Y.K.; visualization, C.-S.L. and L.-Y.K.; supervision, Y.-M.K. and L.-Y.K.; project administration, Y.-M.K. and W.-T.C.; funding acquisition, H.-Y.H. and W.-T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council, R.O.C., grant numbers 111-2221-E-150-033 and 112-2221-E-150-030, and supported by the Ditmanson Medical Foundation Chia-Yi Christian Hospital, Taiwan (Grant R111-27). The APC was paid by the authors.

Institutional Review Board Statement

CYCH-IRB No. 2021115.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TSFNN: Three-Stage Fusion Neural Network
NNN: Numerical Neural Network
CNN: Categorical Neural Network
FNN: Fusion Neural Network

References

1. Touré, B.; Faye, B.; Kane, A.W.; Lo, C.M.; Niang, B.; Boucher, Y. Analysis of reasons for extraction of endodontically treated teeth: A prospective study. J. Endod. 2011, 37, 1512–1515.
2. Rivera, E.M.; Walton, R.E. Longitudinal tooth fractures: Findings that contribute to complex endodontic diagnoses. Endod. Top. 2007, 16, 82–111.
3. Sedgley, C.M.; Messer, H.H. Are endodontically treated teeth more brittle? J. Endod. 1992, 18, 332–335.
4. Reeh, E.S.; Messer, H.H.; Douglas, W.H. Reduction in tooth stiffness as a result of endodontic and restorative procedures. J. Endod. 1989, 15, 512–516.
5. Barreto, M.S.; Moraes, R.A.; Rosa, R.A.; Moreira, C.H.C.; Só, M.V.R.; Bier, C.A.S. Vertical root fractures and dentin defects: Effects of root canal preparation, filling, and mechanical cycling. J. Endod. 2012, 38, 1135–1139.
6. Wilcox, L.R.; Roskelley, C.; Sutton, T. The relationship of root canal enlargement to finger-spreader induced vertical root fracture. J. Endod. 1997, 23, 533–534.
7. Saw, L.H.; Messer, H.H. Root strains associated with different obturation techniques. J. Endod. 1995, 21, 314–320.
8. Khanagar, S.B.; Al-Ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, application, and performance of artificial intelligence in dentistry–A systematic review. J. Dent. Sci. 2021, 16, 508–522.
9. Boreak, N. Effectiveness of artificial intelligence applications designed for endodontic diagnosis, decision-making, and prediction of prognosis: A systematic review. J. Contemp. Dent. Pract. 2020, 21, 926–934.
10. Saghiri, M.A.; Asgar, K.; Boukani, K.K.; Lotfi, M.; Aghili, H.; Delvarani, A.; Karamifar, K.; Saghiri, A.M.; Mehrvarzfar, P.; Garcia-Godoy, F. A new approach for locating the minor apical foramen using an artificial neural network. Int. Endod. J. 2012, 45, 257–265.
11. Mahmoud, Y.E.; Labib, S.S.; Mokhtar, H.M. Clinical prediction of teeth periapical lesion based on machine learning techniques. In Proceedings of the Second International Conference on Digital Information Processing, Data Mining, and Wireless Communications, Dubai, United Arab Emirates, 16–18 December 2015.
12. Chang, W.T.; Huang, H.Y.; Lee, T.M.; Sung, T.Y.; Yang, C.H.; Kuo, Y.M. Predicting root fracture after root canal treatment and crown installation using deep learning. J. Dent. Sci. 2024, 19, 587–593.
13. Ekert, T.; Krois, J.; Meinhold, L.; Elhennawy, K.; Emara, R.; Golla, T.; Schwendicke, F. Deep learning for the radiographic detection of apical lesions. J. Endod. 2019, 45, 917–922.
14. Orhan, K.; Bayrakdar, I.S.; Ezhov, M.; Kravtsov, A.; Özyürek, T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int. Endod. J. 2020, 53, 680–689.
15. Fukuda, M.; Inamoto, K.; Shibata, N.; Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020, 36, 337–343.
16. Hatvani, J.; Horváth, A.; Michetti, J.; Basarab, A.; Kouamé, D.; Gyöngy, M. Deep learning-based super-resolution applied to dental computed tomography. IEEE Trans. Radiat. Plasma Med. Sci. 2018, 3, 120–128.
17. Hiraiwa, T.; Ariji, Y.; Fukuda, M.; Kise, Y.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofacial Radiol. 2019, 48, 20180218.
18. Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.H. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci. Rep. 2019, 9, 3840.
19. Campo, L.; Aliaga, I.J.; De Paz, J.F.; García, A.E.; Bajo, J.; Villarubia, G.; Corchado, J.M. Retreatment predictions in odontology by means of CBR systems. Comput. Intell. Neurosci. 2016, 2016, 1–11.
20. Johari, M.; Esmaeili, F.; Andalib, A.; Garjani, S.; Saberkari, H. Detection of vertical root fractures in intact and endodontically treated premolar teeth by designing a probabilistic neural network: An ex vivo study. Dentomaxillofacial Radiol. 2017, 46, 20160107.
21. Kuwada, C.; Ariji, Y.; Fukuda, M.; Kise, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the maxillary incisor region on panoramic radiographs. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2020, 130, 464–469.
22. Pinto-Coelho, L. How artificial intelligence is shaping medical imaging technology: A survey of innovations and applications. Bioengineering 2023, 10, 1435.
23. Avesta, A.; Hossain, S.; Lin, M.; Aboian, M.; Krumholz, H.M.; Aneja, S. Comparing 3D, 2.5D, and 2D approaches to brain image auto-segmentation. Bioengineering 2023, 10, 181.
Figure 1. Original ANNs.
Figure 2. Numerical neural network.
Figure 3. The procedure of generating an embedding layer: (a) Pre-train embedding layer; (b) Embedding layer.
Figure 4. Categorical neural network.
Figure 5. The architecture of TSFNN.
Figure 6. Comparison of two architectures: (a) a two-way neural network combination of numerical and categorical data; (b) a new architecture of data fusion neural network.
Table 1. A clinical dataset.

No. | Item | Options
1 | sex | Male, Female
2 | previous dental fractures | Yes, No
3 | previous prostheses | Yes, No
4 | preoperative pain | Yes, No
5 | percussion pain | Yes, No
6 | endodontic retreatment | Yes, No
7 | tooth position | Maxillary anterior teeth; Maxillary molars; Maxillary premolars; Mandibular anterior teeth; Mandibular molars; Mandibular premolars
8 | posts placement | None; Para post; Casting post; Fiber post; Screw post
9 | abutment of removable dentures or fixed partial dental prostheses | None; Fixed partial dental prostheses; Abutment of removable dentures; Both
10 | previous apicoectomy or root amputation | None; Previous apicoectomy; Root amputation
11 | the age at the time of treatment | number
12 | quantity of remaining tooth walls | number
13 | duration from completion of root canal treatment until the date of prosthetic installation | number
14 | tooth wear condition | number
15 | periodontal condition | number
16 | remaining root canal wall thickness | number
17 | pericervical dentin thickness | number
Table 2. Ordinal encoding codes of the categorical items.

No. | Item | Options (Ordinal Code)
1 | sex | Male (1); Female (2)
2 | previous dental fractures | Yes (3); No (4)
3 | previous prostheses | Yes (5); No (6)
4 | preoperative pain | Yes (7); No (8)
5 | percussion pain | Yes (9); No (10)
6 | endodontic retreatment | Yes (11); No (12)
7 | tooth position | Maxillary anterior teeth (13); Maxillary molars (14); Maxillary premolars (15); Mandibular anterior teeth (16); Mandibular molars (17); Mandibular premolars (18)
8 | posts placement | None (19); Para post (20); Casting post (21); Fiber post (22); Screw post (23)
9 | abutment of removable dentures or fixed partial dental prostheses | None (24); Fixed partial dental prostheses (25); Abutment of removable dentures (26); Both (27)
10 | previous apicoectomy or root amputation | None (28); Previous apicoectomy (29); Root amputation (30)
Table 3. Comparison of TSFNN with and without batch normalization.

Architectures | Accuracy | Precision | Recall | F1-Score
TSFNN | 0.759 | 0.818 | 0.823 | 0.819
TSFNN with batch normalization | 0.821 | 0.822 | 0.937 | 0.874
Improvement | 0.062 | 0.004 | 0.114 | 0.055
Table 4. Comparison of ordinal encoding with and without embedding layer.

Architectures | Accuracy | Precision | Recall | F1-Score
Ordinal encoding only | 0.703 | 0.757 | 0.813 | 0.784
Ordinal encoding with embedding layer | 0.821 | 0.822 | 0.937 | 0.874
Improvement | 0.118 | 0.065 | 0.124 | 0.090
Table 5. Comparison of regular NN and TSFNN.

Architectures | Accuracy | Precision | Recall | F1-Score
Two-way NN | 0.752 | 0.780 | 0.887 | 0.828
TSFNN | 0.821 | 0.822 | 0.937 | 0.874
Improvement | 0.069 | 0.042 | 0.050 | 0.046
Table 6. Comparison of SVM, ANNs, two-way ANNs and TSFNN.

Methods | Accuracy | Precision | Recall | F1-Score
SVM | 0.717 | 0.719 | 0.947 | 0.817
ANNs | 0.731 | 0.801 | 0.793 | 0.796
Two-way ANNs | 0.752 | 0.780 | 0.887 | 0.828
TSFNN | 0.821 | 0.822 | 0.937 | 0.874
