#### *3.3. Tooth Detection*

A deep convolutional neural network (DCNN) with an AlexNet architecture was employed by Miki et al. for classifying tooth types on dental cone-beam computed tomography (CT) images. In that study, forty-two images were used to train the network and ten to test it, and a relatively high accuracy (above 80%) was obtained [24].

A mask region-based convolutional neural network (Mask R-CNN) was employed by Jader et al. to segment the contour of each tooth in 1500 panoramic X-ray radiographs. The outcome metrics employed in this study were accuracy, F1-score, precision, recall, and specificity, with values of 0.98, 0.88, 0.94, 0.84, and 0.99, respectively [23].
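The five metrics reported in these studies all derive from the same confusion-matrix counts. A minimal sketch with hypothetical tooth/non-tooth counts (not taken from Jader et al.):

```python
# Illustrative confusion-matrix counts for a binary tooth detector;
# the numbers are made up for demonstration.
tp, fp, fn, tn = 84, 5, 16, 895

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # also called sensitivity
specificity = tn / (tn + fp)
f1 = 2 * precision * recall / (precision + recall)

print(f"acc={accuracy:.3f} P={precision:.3f} R={recall:.3f} "
      f"spec={specificity:.3f} F1={f1:.3f}")
# → acc=0.979 P=0.944 R=0.840 spec=0.994 F1=0.889
```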

Faster regions with convolutional neural network features (faster R-CNN), implemented in TensorFlow, were used by Chen et al. to detect and number the teeth in dental periapical films [16]. Here, 800 images were employed as the training dataset, 200 as the test dataset, and 250 as the validation dataset. The outcome metrics were recall and precision, with values of 0.728 and 0.771, respectively. Chen et al. also employed a neural network to predict the numbers of missing teeth.

Zhang et al. applied a faster R-CNN and a region-based fully convolutional network (R-FCN) to periapical images. Here, 700 images were employed to train the network, 200 to test it, and 100 to validate it. The proposed method achieved a precision close to 0.958 and a recall of 0.961 [20].

The efficiencies of a radial basis function neural network (RBFNN) and of a GAME neural network in predicting dental age were compared by Velemínská et al. This study employed panoramic X-rays of 1393 Czech individuals aged from three to 17 years, and the standard deviation of the age estimate was the reported outcome metric [25].

A total of 1352 panoramic images were employed by Tuzoff et al. to detect teeth using a Faster R-CNN architecture [3]. This study obtained a sensitivity of 0.9941 and a precision of 0.9945.

Raith et al. classified teeth by employing a CNN architecture implemented with the PyBrain library, reporting a performance of 0.93 [21].

One hundred dental panoramic radiographs were employed by Muramatsu et al. for an object detection network using a four-fold cross-validation method. The tooth detection sensitivity was 96.4% and the accuracy was 93.2% [28].
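The four-fold cross-validation scheme used by Muramatsu et al. can be sketched with the standard library alone; here the 100 radiographs are simply represented by indices 0–99, and each fold serves once as the test set while the other three train the detector:

```python
# Minimal k-fold cross-validation index generator (stdlib only).
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]          # held-out fold
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

for fold, (train, test) in enumerate(k_fold_indices(100, 4)):
    print(f"fold {fold}: train={len(train)} images, test={len(test)} images")
# every fold: train=75 images, test=25 images
```

Averaging the detection metrics over the four held-out folds yields the reported figures without ever testing on a training image.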

Oktay employed an AlexNet architecture on a database of 100 panoramic radiographs to detect teeth, achieving an accuracy of over 0.92 depending on the tooth type (molars, incisors, and premolars) [30].

#### *3.4. Caries Detection*

Two deep convolutional neural networks (CNNs), ResNet18 and ResNeXt50, were applied by Schwendicke et al. to detect caries lesions in near-infrared light transillumination (NILT) images [10]. In this study, 226 extracted permanent human teeth (113 premolars and 113 molars) were employed. According to their results, the two models performed similarly in detecting caries on tooth segments of the NILT images. The area under the curve (AUC), sensitivity, and specificity were evaluated, with results of 0.74, 0.59, and 0.76, respectively.

A deep learning model was employed by Casalegno et al. for the automated detection and localization of dental lesions in 217 near-infrared transillumination images of upper and lower molars and premolars. Here, 185 images were used to train the network and 32 images were used to validate it. The model achieved an area under the curve (AUC) of 85.6% for proximal lesions and an AUC of 83.6% for occlusal lesions [26].
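The AUC figures reported in these studies admit a simple ranking interpretation: the AUC equals the probability that a randomly chosen lesion image receives a higher model score than a randomly chosen healthy image. A minimal sketch with made-up labels and scores:

```python
# AUC via the pairwise-ranking (Mann-Whitney) interpretation.
def roc_auc(labels, scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                  # 1 = lesion, 0 = healthy (illustrative)
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]      # hypothetical model outputs
print(roc_auc(labels, scores))               # ≈ 0.889 (8 of 9 pairs ranked correctly)
```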

A total of 3000 periapical radiographs were employed by Lee et al. to detect dental caries [13]. Of the total dataset, 25.9% of the images were maxillary premolars, 25.6% maxillary molars, 24.1% mandibular premolars, and 24.4% mandibular molars. The authors implemented a deep CNN whose weight factors were initialized from a pre-trained GoogLeNet Inception v3 network, and the datasets were trained using transfer learning. For detecting caries, this study obtained accuracies of 89%, 88%, and 82% for premolar, molar, and premolar-molar models, respectively, with corresponding AUC values of 0.917, 0.890, and 0.845.
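The transfer-learning recipe described above (pre-trained weights, retrained classifier) can be sketched in PyTorch. To keep the example self-contained, a tiny stand-in backbone replaces the actual Inception v3 (whose pre-trained weights would need to be downloaded); all layer sizes here are assumptions for illustration, not taken from Lee et al. The mechanics are the same: freeze the pre-trained feature extractor and optimize only a new classification head.

```python
import torch
import torch.nn as nn

# Tiny stand-in for a pre-trained backbone (real work would load Inception v3).
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False          # freeze "pre-trained" weights

head = nn.Linear(8, 2)               # new head: caries vs. no caries
model = nn.Sequential(backbone, head)

# Only the head's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

x = torch.randn(4, 1, 64, 64)        # batch of 4 hypothetical grayscale crops
logits = model(x)
print(logits.shape)                  # torch.Size([4, 2])
```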

Zanella-Calzada et al. employed an ANN to predict caries status from socioeconomic and dietary factors [27]. The network used in this study had seven layers: four dense layers and three dropout layers. Data from a total of 9812 subjects were employed, 70% for training and the remaining 30% for testing. The results showed an accuracy of approximately 0.69 and an AUC of 0.75.
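A plausible layout of such a seven-layer network (four dense layers interleaved with three dropout layers) can be sketched in PyTorch; the layer widths, dropout rate, and input size below are assumptions for illustration, not taken from Zanella-Calzada et al.:

```python
import torch
import torch.nn as nn

# Hypothetical seven-layer ANN: 4 dense (Linear) + 3 dropout layers.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),    # input size 20 is an assumption
    nn.Dropout(0.3),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(16, 1), nn.Sigmoid(),  # caries-risk probability in (0, 1)
)

model.eval()                          # disable dropout for inference
risk = model(torch.randn(3, 20))      # 3 hypothetical subjects
print(risk.shape)                     # torch.Size([3, 1])
```

Dropout randomly zeroes activations during training, which regularizes a network trained on tabular survey data of this size.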

A total of 3000 bitewing images were employed by Srivastava et al. to detect dental caries with a deep fully convolutional neural network. The results of this study were a recall of 0.805, a precision of 0.615, and an F1-score of 0.7 [22].
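The reported F1-score follows directly from the reported precision and recall as their harmonic mean, which can be checked in a couple of lines:

```python
# Verify the F1-score from the precision and recall reported in the study.
precision, recall = 0.615, 0.805
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.697, consistent with the reported F1 of ~0.7
```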

A total of 251 radiovisiography images were employed by Prajapati et al. to detect caries with a convolutional neural network, which achieved an accuracy of 0.875 [29].

A back-propagation neural network trained on a database of 105 intra-oral images was employed by Geetha et al. to detect caries. This architecture achieved an accuracy of 0.971 and a precision-recall curve (PRC) area of 0.987 [31].
