Article

Machine Learning Based Prediction of Squamous Cell Carcinoma in Ex Vivo Confocal Laser Scanning Microscopy

1 Department of Dermatology and Allergy, University Hospital, LMU Munich, 80337 Munich, Germany
2 PhD School in Clinical and Experimental Medicine, University of Modena and Reggio Emilia, 41125 Modena, Italy
3 Munich Innovation Labs GmbH, 80336 Munich, Germany
4 M3i Industry-in-Clinic-Platform GmbH, 80336 Munich, Germany
5 Dr. Phillip Frost Department of Dermatology & Cutaneous Surgery, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
* Author to whom correspondence should be addressed.
The first two authors contributed equally to the work and share first authorship.
Cancers 2021, 13(21), 5522; https://doi.org/10.3390/cancers13215522
Submission received: 20 September 2021 / Revised: 22 October 2021 / Accepted: 29 October 2021 / Published: 3 November 2021


Simple Summary

Squamous cell carcinoma is the second most common type of skin cancer, with incidence rates rising each year. Micrographic surgery is the treatment of choice for large, aggressive, or recurrent lesions. To ensure complete removal, excised tissue is frozen or embedded in paraffin, cut with a microtome, and stained for examination by an expert Mohs surgeon or a dermatopathologist. Resection of the tumor is therefore performed in multiple steps, resulting in delayed wound closure, patient discomfort, longer hospital stays and, in turn, higher healthcare costs. In the last few years, ex vivo confocal laser scanning microscopy (CLSM) has shown promising results in intraoperative, near-real-time detection of skin cancers. This technology is not yet widespread, in part because of the training required for laboratory technicians, surgeons and dermatopathologists. To increase efficiency and objectivity in the image interpretation process, we built a machine learning model to detect squamous cell carcinoma in excised tissue.

Abstract

Image classification with convolutional neural networks (CNN) offers an unprecedented opportunity for medical imaging. Regulatory agencies in the USA and Europe have already cleared numerous deep learning/machine learning based medical devices and algorithms. While the field of radiology is at the forefront of the artificial intelligence (AI) revolution, conventional pathology, which commonly relies on the examination of tissue samples on a glass slide, is falling behind in leveraging this technology. Ex vivo confocal laser scanning microscopy (ex vivo CLSM), on the other hand, owing to its digital workflow, has a high potential to benefit from integrating AI tools into the assessment and decision-making process. The aim of this work was to explore a preliminary application of CNN to digitally stained ex vivo CLSM images of cutaneous squamous cell carcinoma (cSCC) for the automated detection of tumor tissue. Thirty-four freshly excised tissue samples were prospectively collected and examined immediately after resection. After the histologically confirmed ex vivo CLSM diagnosis, the tumor tissue was annotated for segmentation by experts in order to train the MobileNet CNN. The model was then trained and evaluated using cross validation. The overall sensitivity and specificity of the deep neural network for detecting cSCC and tumor-free areas on ex vivo CLSM slides compared to expert evaluation were 0.76 and 0.91, respectively. The area under the ROC curve was 0.90 and the area under the precision-recall curve was 0.85. The results demonstrate a high potential of deep learning models to detect cSCC regions on digitally stained ex vivo CLSM slides and to distinguish them from tumor-free skin.

1. Introduction

Cutaneous squamous cell carcinoma (cSCC) is a subtype of keratinocyte cancer (KC) that usually presents as a solitary, firm papule or plaque with a hyperkeratotic surface on chronically sun-exposed areas. Due to its high incidence [1] and risk of metastasis, it represents a major health concern [2]. Clinical diagnosis of cSCC may be challenging due to overlapping clinical features with other skin neoplasms such as keratoacanthoma, basal cell carcinoma (BCC), Bowen’s carcinoma or Merkel cell carcinoma [3]. Therefore, surgical excision with subsequent histopathologic evaluation plays a critical role in accurate diagnosis, therapy and management [4].
The current workflow of pathological examination relies on labor-intensive and time-consuming tissue processing procedures such as paraffin embedding and sectioning. Compared to paraffin sections, frozen sections are less time-consuming and are therefore often preferred in situations where timely decisions are needed (e.g., Mohs surgery). Nevertheless, this rapid diagnostic process comes at the expense of a partial loss of cellular tissue architecture caused by freezing artifacts, difficulty in cutting resulting in tissue folds, and poor staining quality, to name a few [5,6]. Furthermore, individual factors such as the pathologist’s level of experience and expertise may result in inter- and intraobserver variability as well as variations in efficiency [7,8,9].
Ex vivo confocal laser scanning microscopy (ex vivo CLSM) is one of the diagnostic innovations developed to overcome current challenges in traditional pathology [10,11,12]. It allows instant, high-resolution, bedside imaging of intact freshly excised tissue samples at a subcellular level [13]. Integrated features such as optical sectioning and mosaicking enable visualization of whole tissue samples (generally up to 25 × 25 mm) at various depths and at up to 550-fold magnification [14,15,16]. Techniques such as acetowhitening and the fluorescent mode (FM), together with the use of contrast agents like acridine orange (AO), can reveal tissue structures or cell organelles that are not visible in reflectance mode (RM) [14,15,16]. In addition, the recently introduced digital staining (DS) software can effectively simulate the conventional hematoxylin and eosin (H&E) staining that most dermatopathologists are trained to interpret [17,18]. Ex vivo CLSM should be distinguished from its in vivo counterpart, which is based on a single near-infrared laser source and is used in daily practice for bedside, non-invasive and painless diagnosis of cutaneous malignancies and several skin diseases, from mite infestations to lupus [19,20,21]. In vivo CLSM images are displayed in grey scale and in horizontal sections, so that their comparison with vertical histological sections requires dedicated training [22,23].
A preliminary report by Longo and colleagues already described the feasibility of ex vivo CLSM for intraoperative diagnosis of cSCC. Among 47 mosaics, which corresponded to 13 tumor sections and 34 tumor margins, 43 could be evaluated and 41 agreed with the frozen histopathologic sections [24]. A subsequent study from our group further demonstrated that ex vivo CLSM using both RM and FM enables differentiation of in situ carcinoma from invasive SCC [25].
Ex vivo CLSM also has limitations. To date, the majority of practicing physicians are novice readers of this emerging technology; the interpretation of acquired images requires dedicated training and experience, even for expert dermatopathologists, who need to become accustomed to the different imaging modes and to the DS [26]. In addition, inter- and intraobserver variability among both experienced and non-experienced examiners poses a challenge [27]. Last but not least, detection of the numerous features and diagnostic criteria described in the present literature may be confusing and lengthy, especially for a novice reader in an intraoperative setting where time is scarce [16,25,27,28].
Over the past decade, with the adoption of machine learning tools, medicine has entered the dawn of a new era. Technological advances in computing power, network speeds and big data storage, the digital transformation of healthcare, and the development of new advanced algorithms are accelerating the translation of artificial intelligence (AI) from bench to clinic. Medical fields such as pathology and radiology, which rely on visual patterns and clues, are already starting to adopt AI into their workflow [28,29,30]. The increasing interest in such technologies is also demonstrated by the fact that regulatory agencies in the USA and Europe have started focusing on solutions to improve the use of big data, facilitate innovation and support public health [31,32]. This is also reflected in the approval of several deep learning/machine learning based medical devices and algorithms in the last few years, mainly employed in radiology and cardiology, but also in neurology, oncology, endocrinology, internal and emergency medicine and dermatology, to support decision making, interpret medical images, detect subclinical pathologies early, customize drug dosages, improve medication adherence and even reduce waiting times [32,33].
To date, several studies have justified the urgent need for an accelerated transformation by demonstrating that computer-assisted detection and diagnosis systems increase accuracy while reducing the time required for image interpretation [30,34,35,36]. In the case of pathology, however, the adoption rate has been rather slow, in part due to obstacles in implementing a fully digital workflow and to training requirements [37,38]. Since ex vivo CLSM images are acquired and stored digitally, this technology is well suited for the integration of machine learning algorithms. To date, however, no systematic studies on the automated detection of cSCC on CLSM images have been reported. In an attempt to accomplish standardized, objective, and accurate interpretation of cSCC images acquired by ex vivo CLSM, we tested the feasibility of integrating a machine learning algorithm into our image analysis process. We aimed to determine the diagnostic performance of a new MobileNet convolutional neural network (CNN) for the detection of cSCC and tumor-free areas on CLSM images.

2. Materials and Methods

2.1. Population

From September 2020 to April 2021, 34 freshly excised tissue samples from 29 patients were examined using a 4th generation ex vivo CLSM device (Vivascope 2500, MAVIG Germany GmbH, Munich, Germany). These included 22 invasive cSCC from 17 patients (5 females, 12 males, aged 61–99 years) and 12 tumor-free tissue samples from 12 patients, taken from the same anatomical areas and donated from the puckering skin (“dog ears”) that occurs during surgical wound closure (Table 1). The inclusion criteria were either a histological diagnosis of cSCC or the histological exclusion of any tumor tissue. The specimens were obtained from the Dermatosurgical Unit of the Department of Dermatology and Allergy, Ludwig Maximilian University (LMU) of Munich. Preoperatively, written informed consent was provided by each patient following the principles of the Declaration of Helsinki. The study was approved by the local ethics committee of the LMU Munich (protocol number 19-150).

2.2. Device and Staining Technique

The 4th generation ex vivo CLSM device used in the current study (Vivascope 2500, MAVIG Germany GmbH, Munich, Germany) combines two lasers of different wavelengths (488 nm and 638 nm) and allows the examiner to evaluate the samples at up to 550-fold magnification in different modes: FM (488 nm), RM (638 nm) and overlay mode (OM). The device can also produce images that simulate the effect of H&E staining in the DS mode [39].
All fresh tissue samples were processed immediately after resection. They were stained in AO (0.04 mg/mL, corresponding to 0.12 mmol/L; Sigma-Aldrich, St. Louis, MO, USA) for 30 s, subsequently washed in Dulbecco’s Phosphate Buffered Saline (PBS, pH 7.4) for 20 s and placed in 10% citric acid solution for 30 s. After this staining procedure, the samples were fixed on the object slide using a sponge pad and magnets, as described by Pérez-Anker et al. [40]. All samples were scanned vertically to enable exact correlation with the corresponding histological images. After ex vivo CLSM imaging, the samples were fixed in formalin, embedded in paraffin, and stained with H&E for conventional histopathological reporting.

2.3. Image Evaluation and Annotation Process

All histopathological slides were evaluated by the senior pathologist of the department. All confocal images were evaluated and segmented by a dermatosurgeon expert in imaging techniques and histopathology (DH), a dermatosurgeon expert in imaging techniques with histopathology knowledge (CR) and a trainee expert in imaging techniques with histopathology knowledge (SS), using the criteria defined by Hartmann et al. [25]. In cases of discrepancy, the senior dermatopathologist was involved. Only invasive cSCCs were considered for this study. After the histological and ex vivo CLSM diagnosis, the tumor tissue was manually annotated for segmentation on the digitally stained ex vivo CLSM images by an expert dermatologist with experience in both ex vivo CLSM and traditional histopathology, using the program QuPath as described by Bankhead et al. [41] and adapted after Arvaniti et al. [42]. All tumors were annotated and analyzed in DS mode.

2.4. Preprocessing Data and Creation of a Deep Neural Network

The original ex vivo CLSM images (roughly 10,000 × 10,000 pixels) were split into smaller mosaics of 256 × 256 pixels; other mosaic sizes were not tested. If more than 50% of a 256 × 256-pixel patch was marked as tumor tissue, the whole patch was considered tumor and thus a region of interest (ROI). Choosing a threshold of 50% generated a balanced representation of tumor tissue at the edges. The 22 invasive cSCCs generated a total of 45,832 mosaics (39,423 tumor-free and 6409 tumor), and the 12 tumor-free images were split into 22,971 mosaics, each of which was separately evaluated by the algorithm (Table 2).
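For illustration, the tiling and labeling step can be sketched in a few lines of Python. The function name tile_and_label and the assumption that the expert annotation is available as a binary mask raster are ours, not the authors’ code:

import numpy as np

def tile_and_label(image, tumor_mask, tile=256, tumor_frac=0.5):
    """Split a large scan into non-overlapping tile x tile mosaics.

    A patch is labeled 1 (tumor, i.e., ROI) when more than `tumor_frac`
    of its pixels lie inside the expert tumor annotation, otherwise 0.
    """
    h, w = tumor_mask.shape
    patches, labels = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch_mask = tumor_mask[y:y + tile, x:x + tile]
            patches.append(image[y:y + tile, x:x + tile])
            labels.append(int(patch_mask.mean() > tumor_frac))
    return np.stack(patches), np.array(labels)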
Then a MobileNet [43], a light weight deep convolutional neural network known for using depth-wise separable convolutions, which decreases processing time (arXiv:1704.04861), was used. The report by Adam and Howard and colleagues demonstrated MobileNets to have a better performance in comparison to other popular models on ImageNet classification such as VGG 16 [43,44]. This model uses 3 × 3 depthwise separable convolutions, which equires 8 to 9 times less computation when compared to standard convolutions [43,45]. In this architecture, all layers are followed by a batchnorm [45] and rectified linear unit (ReLU) nonlinearity with the exception of the final fully connected layer, which has no nonlinearity and feeds into a softmax layer for classification [43,44]. After the convolutional blocks, we added a global average pooling layer to extrapolate the information of the convolutional layers into a single vector. Two neurons were connected, whose weights and bias were randomly initialized. For the convolutional blocks, fine-tuning of the ImageNet-learned parameters was performed, as described by Arvaniti et al. [42] to achieve superior results. The group postulated that the neural networks trained by randomly initializing the starting weights, show a tendency to overfit to the training set and become overly confident in their predictions, producing higher cross-entropy loss in case of misclassifications. On the contrary, fine-tuned networks that are additionally regularized by dropout, such as MobileNet, do not reach 100% accuracy in the training set and do not exhibit an explosion in the validation cross-entropy loss scores [42]. Of note, the random initialization of weights and bias on the 2 neurons should not have had a significant impact on our evaluation as it presents only 0.125% of the network (1026 parameters compared to 815,592 parameters in the convolutional layers).
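A minimal Keras sketch of such a model is shown below, assuming the tf.keras MobileNet implementation; the width multiplier, dropout placement and head configuration are not stated explicitly in the text, so they are illustrative assumptions (the width multiplier of 0.5 is chosen only so that the head size roughly matches the parameter counts reported above):

import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained MobileNet backbone without its classifier (to be fine-tuned),
# followed by global average pooling and a randomly initialized 2-neuron softmax
# head (tumor vs. tumor-free).
backbone = tf.keras.applications.MobileNet(
    input_shape=(256, 256, 3), alpha=0.5,      # width multiplier: an assumption
    include_top=False, weights="imagenet")
x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dropout(0.2)(x)                      # dropout rate reported in Section 2.4
outputs = layers.Dense(2, activation="softmax")(x)
model = models.Model(backbone.input, outputs)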
The Adam optimizer [46] was applied to adjust the weights during training, with a learning rate of 1 × 10⁻⁵. The model was trained to minimize the cross-entropy loss [47]. We used a dropout rate of 0.2 to avoid overfitting. To further combat overfitting, additional preprocessing steps such as rotation, zooming and flipping were applied to the training data. One training session took roughly 4 h on an NVIDIA GeForce GTX 1080 Ti.
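Continuing the sketch above, the training configuration might look as follows in Keras; the directory layout, batch size, epoch count and exact augmentation ranges are assumptions for illustration only:

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model.compile(optimizer=Adam(learning_rate=1e-5),     # learning rate from Section 2.4
              loss="categorical_crossentropy",        # cross-entropy loss
              metrics=["accuracy"])

augmenter = ImageDataGenerator(rescale=1.0 / 255,
                               rotation_range=90,     # rotation
                               zoom_range=0.2,        # zooming
                               horizontal_flip=True,  # flipping
                               vertical_flip=True)
train_flow = augmenter.flow_from_directory("mosaics/train",   # hypothetical folder
                                           target_size=(256, 256),
                                           batch_size=32,
                                           class_mode="categorical")
model.fit(train_flow, epochs=20)                      # epoch count not reported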
During the evaluation phase, the trained MobileNet was applied to the entire ex vivo CLSM image in a sliding-window fashion and generated pixel-level probability heatmaps for both classes (tumor and tumor-free). We used a hard threshold to extrapolate a segmentation mask from the heatmap, which corresponds to the output of the MobileNet. Analyzed mosaics were defined as ROI if the algorithm predicted a probability of being tumorous higher than 0.3, which we defined as the classification threshold. Before extrapolating the segmentation mask, the heatmap was smoothed with a Gaussian filter to enhance its visual representation; this had no impact on performance. Further post-processing steps were performed on the segmentation mask: morphological erosion and dilation were applied to remove isolated pixels and then restore the original segmentation mask.
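The post-processing chain can be sketched as follows; the Gaussian sigma and the default 3 × 3 structuring elements are illustrative assumptions, as they are not reported in the text:

from scipy.ndimage import gaussian_filter, binary_erosion, binary_dilation

def heatmap_to_mask(tumor_heatmap, threshold=0.3, sigma=2.0):
    """Turn a pixel-level tumor-probability heatmap into a segmentation mask."""
    smoothed = gaussian_filter(tumor_heatmap, sigma=sigma)  # visual smoothing only
    mask = smoothed > threshold                             # classification threshold of 0.3
    mask = binary_erosion(mask)                             # remove isolated pixels
    mask = binary_dilation(mask)                            # restore the original extent
    return mask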

2.5. Statistical Evaluation

Our sample size was not large enough to accurately estimate the model performance with a single train-test partition. Therefore, we evaluated our model using 17-fold cross validation. After dividing the data set randomly into 17 groups, 16 served as training data, whereas the remaining one was used as the test set. This procedure was repeated 17 times so that each group was used as a test set once. Before the statistical evaluation, a tissue separation mask was created to separate the background from the tissue, using Otsu thresholding [48]. Sensitivity and specificity for detecting tumor tissue were calculated for the entire dataset, including all mosaics from the 22 invasive cSCCs as well as the 12 tumor-free images. The overall predictive value was analyzed by the area under the receiver operating characteristic (ROC) curve and under the precision-recall curve for the entire dataset. The area under the ROC curve indicates whether the deep neural network distinguishes true positives (cancer) from false positives better than chance (area under the curve > 0.5). The area under the precision-recall curve does not consider true negatives (truly healthy skin) and was additionally used because of our unbalanced classes; it analyzes the fraction of cancer-labeled tissue that really is cancer (precision) and the fraction of cancer tissue that was labeled as such by the software (recall).
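A compact sketch of this pixel-level evaluation for a single scan is given below; the variable names are ours, and we assume the tissue appears brighter than the background in the grayscale rendering used for Otsu thresholding (otherwise the comparison would be inverted):

from skimage.filters import threshold_otsu
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_scan(gray_scan, tumor_heatmap, expert_mask):
    """Compute ROC-AUC and precision-recall AUC on tissue pixels only."""
    tissue = gray_scan > threshold_otsu(gray_scan)   # tissue separation mask (Otsu)
    y_true = expert_mask[tissue].astype(int)         # expert annotation as ground truth
    y_score = tumor_heatmap[tissue]                  # predicted tumor probabilities
    return (roc_auc_score(y_true, y_score),
            average_precision_score(y_true, y_score))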

3. Results

To test the performance of a machine learning algorithm in detecting cSCC in images acquired by ex vivo CLSM, we collected fresh tissue scans of 22 invasive cSCCs from 17 patients (5 females and 12 males, mean age 78.6 years). These were manually segmented for tumor areas and evaluated (Figure 1). Furthermore, CLSM images from 12 samples of tumor-free skin were included as a negative control. The most frequently involved body site was the head and neck area (including scalp, nose, cheeks and lips) (Table 1).
The overall sensitivity and specificity of the deep neural network in detecting cSCC and tumor-free skin in the ex vivo CLSM images compared to the expert examination were 0.76 and 0.91, respectively, using a classification threshold of 0.3 along with the aforementioned post-processing steps. The threshold of 0.3 was chosen because it offered the best balance between sensitivity and specificity, generating the point closest (in Euclidean distance) to (0, 1) on the ROC curve. To obtain the ROC curve, we predicted heatmaps for the cases in the validation set of every fold of the cross validation (Figure 1). We grouped all tissue into one set of pixels and calculated specificity and sensitivity using 20 classification thresholds equally distributed between 0 and 1. The precision-recall curve was calculated in the same manner. The area under the ROC curve was 0.90 and the area under the precision-recall curve was 0.85 (Figure 2).
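For illustration, this threshold choice can be reproduced with a short Python sketch; it assumes the pooled pixel labels and predicted probabilities are available as NumPy arrays and scans the 20 equally spaced candidate thresholds mentioned above:

import numpy as np

def pick_threshold(y_true, y_score, n_thresholds=20):
    """Return the candidate threshold closest (Euclidean) to (0, 1) in ROC space."""
    best, best_dist = None, np.inf
    for t in np.linspace(0, 1, n_thresholds):
        pred = y_score > t
        tpr = (pred & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)  # sensitivity
        fpr = (pred & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)  # 1 - specificity
        dist = np.hypot(fpr, 1 - tpr)        # distance to the ideal corner (0, 1)
        if dist < best_dist:
            best, best_dist = t, dist
    return best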
Analyzing the cSCC group and the tumor-free skin group separately, we reached a sensitivity of 0.76 and a specificity of 0.88 for cSCC, and a specificity of 0.92 for tumor-free skin (sensitivity was not calculated, as it was not applicable in this case) (Table 3).
The following causes of false positives were observed: sun-damaged skin of elderly patients, dense inflammatory infiltrates surrounding the tumor masses, sebaceous glands and hair follicles. In one case of the negative control group (tumor-free skin), clear-cut sebaceous glands were misclassified as tumor tissue (Figure 3).

4. Discussion

The surgical management of cSCC, especially in the head and neck area, is often performed in multiple steps with delayed wound closure, resulting in higher healthcare costs and patient discomfort. To accelerate this time-consuming diagnostic process, Mohs micrographic surgery on frozen sections is used in selected centers worldwide, but its efficacy in high-risk tumors still needs further investigation [49].
Ex vivo CLSM has opened new horizons in bedside histology, since it allows rapid analysis of freshly excised tissue after a short staining time with AO. When necessary, the same samples can be processed for conventional dermatopathology, by H&E staining or immunohistochemistry, without harming the tissue [11,50]. The time required for ex vivo CLSM staining and image acquisition is usually a few minutes, depending on the sample size. The images are acquired in all CLSM modes simultaneously and stored instantly. This provides significant advantages, since slides can be analyzed intraoperatively. Furthermore, several reports suggest that the digitalization of pathology can potentially increase the efficiency of the slide reading process and reduce the related costs [51]. Ex vivo CLSM is already being used routinely for margin mapping of BCC in selected centers worldwide, providing a valid alternative to conventional Mohs surgery. Studies demonstrating the potential use of this modality in the detection of other tumor types such as melanoma, cSCC [25,39,52,53], prostate and breast cancer [26,54,55,56] are already available in the literature. For instance, different types of prostatic and periprostatic tissue could be successfully differentiated on ex vivo CLSM images [26]. In addition, potential applications of ex vivo CLSM are not limited to the field of cancer but can also be expanded to the characterization of autoimmune and inflammatory skin diseases such as pemphigus, pemphigoid, lichen planus, lupus erythematosus and vasculitis [13,57,58,59,60,61].
In order to further optimize the diagnostic process, deep learning models could assist pathologists and Mohs surgeons in rapidly and precisely detecting tumor ROI [62,63]. In the field of conventional pathology, various applications incorporating machine learning algorithms have generally shown satisfactory performance in detecting cell nuclei [64], mitoses [65], glands [8] and blood vessels [66]. Moreover, machine learning algorithms can extract subtle features in acquired images beyond what is perceived by the human eye [67] and, in turn, identify patterns and associations [68,69,70].
Although pathology represents an ideal field of application for AI, its routine use has been held back by technical limitations, such as the amount of time required for slide digitalization. Since CLSM automatically acquires and stores digital images, it stands out as an optimal candidate for the development of AI-driven diagnostic workflows. Preliminary studies have reported promising results, for instance in the enhanced automated detection of BCC [71]. Combalia et al. recently described a machine-learning-assisted ex vivo CLSM pathology model that was able to diagnose BCC with a sensitivity and specificity of 88% and 91%, respectively [71]. The model performance compared favorably with analogous studies on digitally scanned, H&E-stained traditional pathology slides [62,63].
Our study explored the use of a lightweight deep CNN with depth-wise separable convolutions for the automated detection of cSCC tissue on digitally stained ex vivo CLSM images. The trained algorithm was used to produce automated tumor positivity maps, in which the cSCC regions were marked and color-coded for display. Our model performed well, demonstrating an overall sensitivity of 0.76 and an overall specificity of 0.91 for distinguishing between cSCC and tumor-free regions.
These preliminary data highlight the potential benefits of deep learning algorithms in skin pathology, since they can significantly accelerate the workflow via automated detection of tumor ROI directly on digitally stained ex vivo CLSM images. Novice examiners in particular can benefit from such detection algorithms, as they can focus on the diagnostically relevant parts of the image in less time with the support of the algorithm. This may improve not only the diagnosis of cSCC, but also the completeness of its resection by marking the tumor margins.
AI-driven pathology in ex vivo CLSM brings a potential improvement to the Mohs surgery workflow, as well as a possible reduction of the patients’ burden in terms of time spent in the operating theatre, number of local anesthesia procedures, pain, and other surgery-related complications. This approach is appealing not only for the surgical resection of malignant cutaneous tumors but can potentially be applied to other types of tumor surgery aiming to obtain free margins before closure. Algorithms able to correctly distinguish between precancerous and cancerous lesions could play a significant role in tumor mapping in the fields of skin or prostate cancer. AI could also be used to develop intelligent imaging training systems for pathologists and physicians. A similar concept could be applied to in vivo CLSM images, even if their segmentation might be harder to perform due to the horizontal orientation of the images. Considering that the interpretation of in vivo CLSM images requires a high level of experience and substantial training, the benefits of AI-driven algorithms might be substantial. Potential application fields include AI-driven image interpretation at the bedside for tumor recognition, as well as training programs for physicians and pathologists and decision support in the context of telemedicine.
It is important to note that this study had several limitations. The pilot algorithm is limited by the small sample size; a larger dataset is necessary to achieve a higher area under the curve and to mimic a real-world scenario. Moreover, our model was based on a binary classification with only two categories: invasive cSCC and tumor-free skin. This excluded a priori early stages of cSCC such as actinic keratoses and Bowen’s disease, which both display a certain grade of cytoarchitectural dysplasia. Our algorithm tended to mark certain regions of elderly sun-damaged skin as ROI. Dense inflammatory infiltrates surrounding the tumor masses, sebaceous glands and hair follicles could be confounded with the tumor itself because of a seemingly similar morphology (Figure 3). Analogous issues were observed in other studies conducted on digitally scanned, conventionally H&E-stained BCC slides [63]. Similar false positive results also occur when experienced dermatohistopathologists assess CLSM images, as these structures are difficult to differentiate [72]. Therefore, this limitation is not confined to the algorithm itself and partly stems from the reduced contrast and image quality. Rather than a binary classification, introducing, for example, peritumoral inflammation as an independent category could probably increase the specificity of the algorithm.
In regions where the image texture of the tissue was similar to that of the background, the deep learning model was not always able to distinguish between tissue and background. Due to the irregular tissue surface, out-of-focus areas led in a few cases to reduced image quality and suboptimal contrast. A clear example of this issue is the repeated exclusion of tumor-free regions that appeared blurred and were misinterpreted by the algorithm as background. Novel AI approaches such as feature engineering can address this problem by automatically identifying out-of-focus regions and subsequently adding an extra set of focus points to these areas [29]. Another pitfall of the algorithm occurred in a few cases where squamous eddies were recognized as background due to their similar image texture. Subtracting the background prior to training, together with an improved background subtraction method that integrates a deep learning algorithm, may overcome these limitations. Moreover, despeckling neural networks and stack collapsing could further optimize image quality [73]; the latter combines areas of highest contrast from optical slices acquired at different focal Z-planes, reducing false negative results arising from air artefacts [71].

5. Conclusions

In summary, we demonstrated in this proof-of-principle study that deep learning models can detect cSCC regions on digitally stained ex vivo CLSM images and distinguish them from tumor-free areas with good sensitivity and specificity.
This might lay the foundations for an improved micrographic surgery workflow in which AO-stained fresh tissue is scanned by ex vivo CLSM, digitally stained and automatically analyzed for cSCC. The new workflow might be more efficient and cost-effective than the currently established standard procedures. It might also be applied when a dermatopathologist is not available on site, through a teledermatology consultation platform.
Our results should serve as a starting point for developing new standardized deep learning algorithms for the automated detection of tumor tissue on ex vivo CLSM images.
Larger multi-institutional studies are warranted to develop standardized acquisition methods, to increase the sensitivity and specificity of deep neural networks for detecting cSCC, and to improve the distinction between different stages of KC.

Author Contributions

Conceptualization, D.H. and C.R.; methodology, D.H., S.S., P.A., Ž.J., V.P.-L., F.N., I.K. and I.U.I.; validation, D.H., C.R., S.S., P.A. and Ž.J.; formal analysis, B.K., D.H., C.R., S.S., P.A., Ž.J., V.P.-L., F.N., I.K. and I.U.I.; investigation, D.H., S.S., P.A., Ž.J., K.P. and E.K.; resources, D.H.; data curation, S.S. and Ž.J.; writing—original draft preparation, C.R., P.A. and S.S.; writing—review and editing, E.S., D.H. and L.E.F.; supervision, D.H.; project administration, D.H., P.A.; funding acquisition, D.H. and Ž.J. All authors have read and agreed to the published version of the manuscript.

Funding

The study was partially funded by the Bayern Innovativ Förderung 2020 (SKIN-ID Projekt, Förderung Bayern Innovativ, Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie, 250.000 €).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of the LMU Munich (Protocol Number 19-150).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Fully anonymised data are available on request; the software details are intended to be protected by a patent.

Acknowledgments

The authors thank Mavig GmbH and M3i GmbH for their constant professional assistance.

Conflicts of Interest

The authors declare no relevant conflict of interest.

References

1. Stang, A.; Khil, L.; Kajuter, H.; Pandeya, N.; Schmults, C.D.; Ruiz, E.S.; Karia, P.S.; Green, A.C. Incidence and mortality for cutaneous squamous cell carcinoma: Comparison across three continents. J. Eur. Acad. Dermatol. Venereol. 2019, 33 (Suppl. 8), 6–10.
2. Que, S.K.T.; Zwald, F.O.; Schmults, C.D. Cutaneous squamous cell carcinoma: Incidence, risk factors, diagnosis, and staging. J. Am. Acad. Dermatol. 2018, 78, 237–247.
3. Ryu, T.H.; Kye, H.; Choi, J.E.; Ahn, H.H.; Kye, Y.C.; Seo, S.H. Features Causing Confusion between Basal Cell Carcinoma and Squamous Cell Carcinoma in Clinical Diagnosis. Ann. Dermatol. 2018, 30, 64–70.
4. Work, G.; Invited, R.; Kim, J.Y.S.; Kozlow, J.H.; Mittal, B.; Moyer, J.; Olenecki, T.; Rodgers, P. Guidelines of care for the management of cutaneous squamous cell carcinoma. J. Am. Acad. Dermatol. 2018, 78, 560–578.
5. Jaafar, H. Intra-operative frozen section consultation: Concepts, applications and limitations. Malays. J. Med. Sci. 2006, 13, 4–12.
6. Desciak, E.B.; Maloney, M.E. Artifacts in frozen section preparation. Dermatol. Surg. 2000, 26, 500–504.
7. Jaarsma, T.; Jarodzka, H.; Nap, M.; van Merrienboer, J.J.; Boshuizen, H.P. Expertise in clinical pathology: Combining the visual and cognitive perspective. Adv. Health Sci. Educ. Theory Pract. 2015, 20, 1089–1106.
8. Sirinukunwattana, K.; Pluim, J.P.W.; Chen, H.; Qi, X.; Heng, P.A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland segmentation in colon histology images: The glas challenge contest. Med. Image Anal. 2017, 35, 489–502.
9. Pech, O.; Schmitz, D.; Gossner, L.; May, A.; Seitz, G.; Vieth, M.; Stolte, M.; Ell, C. Inter-Observer Variability in the Diagnosis of Low-Grade Dysplasia in Pathologists: A Comparison between Experienced and In-Experienced Pathologists. Gastrointest. Endosc. 2006, 63, AB130.
10. Ragazzi, M.; Longo, C.; Piana, S. Ex Vivo (Fluorescence) Confocal Microscopy in Surgical Pathology: State of the Art. Adv. Anat. Pathol. 2016, 23, 159–169.
11. Krishnamurthy, S.; Brown, J.Q.; Iftimia, N.; Levenson, R.M.; Rajadhyaksha, M. Ex Vivo Microscopy: A Promising Next-Generation Digital Microscopy Tool for Surgical Pathology Practice. Arch. Pathol. Lab. Med. 2019, 143, 1058–1068.
12. Rajadhyaksha, M.; Menaker, G.; Flotte, T.; Dwyer, P.J.; Gonzalez, S. Confocal examination of nonmelanoma cancers in thick skin excisions to potentially guide mohs micrographic surgery without frozen histopathology. J. Investig. Dermatol. 2001, 117, 1137–1143.
13. Bagci, I.S.; Aoki, R.; Krammer, S.; Vladimirova, G.; Ruzicka, T.; Sardy, M.; French, L.E.; Hartmann, D. Immunofluorescence and histopathological assessment using ex vivo confocal laser scanning microscopy in lichen planus. J. Biophotonics 2020, 13, e202000328.
14. Patel, Y.G.; Nehal, K.S.; Aranda, I.; Li, Y.; Halpern, A.C.; Rajadhyaksha, M. Confocal reflectance mosaicing of basal cell carcinomas in Mohs surgical skin excisions. J. Biomed. Opt. 2007, 12, 034027.
15. Gareau, D.S.; Patel, Y.G.; Li, Y.; Aranda, I.; Halpern, A.C.; Nehal, K.S.; Rajadhyaksha, M. Confocal mosaicing microscopy in skin excisions: A demonstration of rapid surgical pathology. J. Microsc. 2009, 233, 149–159.
16. Gareau, D.S.; Li, Y.; Huang, B.; Eastman, Z.; Nehal, K.S.; Rajadhyaksha, M. Confocal mosaicing microscopy in Mohs skin excisions: Feasibility of rapid surgical pathology. J. Biomed. Opt. 2008, 13, 054001.
17. Gareau, D.S. Feasibility of digitally stained multimodal confocal mosaics to simulate histopathology. J. Biomed. Opt. 2009, 14, 034050.
18. Schuurmann, M.; Stecher, M.M.; Paasch, U.; Simon, J.C.; Grunewald, S. Evaluation of digital staining for ex vivo confocal laser scanning microscopy. J. Eur. Acad. Dermatol. Venereol. 2020, 34, 1496–1499.
19. Mazzilli, S.; Vollono, L.; Diluvio, L.; Botti, E.; Costanza, G.; Campione, E.; Donati, M.; Prete, M.D.; Orlandi, A.; Bianchi, L.; et al. The combined role of clinical, reflectance confocal microscopy and dermoscopy applied to chronic discoid cutaneous lupus and subacutus lupus erythematosus: A case series and literature review. Lupus 2021, 30, 125–133.
20. Scope, A.; Benvenuto-Andrade, C.; Agero, A.L.; Malvehy, J.; Puig, S.; Rajadhyaksha, M.; Busam, K.J.; Marra, D.E.; Torres, A.; Propperova, I.; et al. In vivo reflectance confocal microscopy imaging of melanocytic skin lesions: Consensus terminology glossary and illustrative images. J. Am. Acad. Dermatol. 2007, 57, 644–658.
21. Longo, C.; Bassoli, S.; Monari, P.; Seidenari, S.; Pellacani, G. Reflectance-mode confocal microscopy for the in vivo detection of Sarcoptes scabiei. Arch. Dermatol. 2005, 141, 1336.
22. Pellacani, G.; Cesinaro, A.M.; Seidenari, S. In vivo assessment of melanocytic nests in nevi and melanomas by reflectance confocal microscopy. Mod. Pathol. 2005, 18, 469–474.
23. Broggi, G.; Verzi, A.E.; Caltabiano, R.; Micali, G.; Lacarrubba, F. Correlation Between In Vivo Reflectance Confocal Microscopy and Horizontal Histopathology in Skin Cancer: A Review. Front. Oncol. 2021, 11, 653140.
24. Longo, C.; Ragazzi, M.; Gardini, S.; Piana, S.; Moscarella, E.; Lallas, A.; Raucci, M.; Argenziano, G.; Pellacani, G. Ex vivo fluorescence confocal microscopy in conjunction with Mohs micrographic surgery for cutaneous squamous cell carcinoma. J. Am. Acad. Dermatol. 2015, 73, 321–322.
25. Hartmann, D.; Krammer, S.; Bachmann, M.R.; Mathemeier, L.; Ruzicka, T.; Bagci, I.S.; von Braunmuhl, T. Ex vivo confocal microscopy features of cutaneous squamous cell carcinoma. J. Biophotonics 2018, 11, e201700318.
26. Bertoni, L.; Puliatti, S.; Reggiani Bonetti, L.; Maiorana, A.; Eissa, A.; Azzoni, P.; Bevilacqua, L.; Spandri, V.; Kaleci, S.; Zoeir, A.; et al. Ex vivo fluorescence confocal microscopy: Prostatic and periprostatic tissues atlas and evaluation of the learning curve. Virchows Arch. 2020, 476, 511–520.
27. Hartmann, D.; Krammer, S.; Bachmann, M.R.; Mathemeier, L.; Ruzicka, T.; von Braunmühl, T. Simple 3-criteria-based ex vivo confocal diagnosis of basal cell carcinoma. J. Biophotonics 2018, 11, e201800062.
28. Allen, B.; Agarwal, S.; Coombs, L.; Wald, C.; Dreyer, K. 2020 ACR Data Science Institute Artificial Intelligence Survey. J. Am. Coll. Radiol. 2021, 18, 1153–1159.
29. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261.
30. Pantanowitz, L.; Quiroga-Garza, G.M.; Bien, L.; Heled, R.; Laifenfeld, D.; Linhart, C.; Sandbank, J.; Albrecht Shach, A.; Shalev, V.; Vecsler, M.; et al. An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: A blinded clinical validation and deployment study. Lancet Digit. Health 2020, 2, e407–e416.
31. Minssen, T.; Gerke, S.; Aboy, M.; Price, N.; Cohen, G. Regulatory responses to medical machine learning. J. Law Biosci. 2020, 7, lsaa002.
32. Muehlematter, U.J.; Daniore, P.; Vokinger, K.N. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): A comparative analysis. Lancet Digit. Health 2021, 3, e195–e203.
33. Benjamens, S.; Dhunnoo, P.; Mesko, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digit. Med. 2020, 3, 118.
34. Freer, T.W.; Ulissey, M.J. Screening mammography with computer-aided detection: Prospective study of 12,860 patients in a community breast center. Radiology 2001, 220, 781–786.
35. Vandenberghe, M.E.; Scott, M.L.; Scorer, P.W.; Soderberg, M.; Balcerzak, D.; Barker, C. Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer. Sci. Rep. 2017, 7, 45938.
36. Nakhleh, R.E.; Grimm, E.E.; Idowu, M.O.; Souers, R.J.; Fitzgibbons, P.L. Laboratory compliance with the American Society of Clinical Oncology/College of American Pathologists guidelines for human epidermal growth factor receptor 2 testing: A College of American Pathologists survey of 757 laboratories. Arch. Pathol. Lab. Med. 2010, 134, 728–734.
37. Griffin, J.; Treanor, D. Digital pathology in clinical use: Where are we now and what is holding us back? Histopathology 2017, 70, 134–145.
38. Ahmad, Z.; Rahim, S.; Zubair, M.; Abdul-Ghafar, J. Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: Present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagn. Pathol. 2021, 16, 24.
39. Perez-Anker, J.; Malvehy, J.; Moreno-Ramirez, D. Ex Vivo Confocal Microscopy Using Fusion Mode and Digital Staining: Changing Paradigms in Histological Diagnosis. Actas Dermo-Sifiliográficas (Engl. Ed.) 2020, 111, 236–242.
40. Perez-Anker, J.; Puig, S.; Malvehy, J. A fast and effective option for tissue flattening: Optimizing time and efficacy in ex vivo confocal microscopy. J. Am. Acad. Dermatol. 2020, 82, e157–e158.
41. Bankhead, P.; Loughrey, M.B.; Fernandez, J.A.; Dombrowski, Y.; McArt, D.G.; Dunne, P.D.; McQuaid, S.; Gray, R.T.; Murray, L.J.; Coleman, H.G.; et al. QuPath: Open source software for digital pathology image analysis. Sci. Rep. 2017, 7, 16878.
42. Arvaniti, E.; Fricker, K.S.; Moret, M.; Rupp, N.; Hermanns, T.; Fankhauser, C.; Wey, N.; Wild, P.J.; Ruschoff, J.H.; Claassen, M. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Sci. Rep. 2018, 8, 12054.
43. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
44. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
45. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 448–456.
46. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
47. Zhang, Z.; Sabuncu, M.R. Generalized cross entropy loss for training deep neural networks with noisy labels. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 3–8 December 2018.
48. Otsu, N. A threshold selection method from gray level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
49. Jefferson, G.D. The Role of Mohs Surgery in Cutaneous Head and Neck Cancer. Otolaryngol. Clin. N. Am. 2021, 54, 439–447.
50. Hartmann, D.; Ruini, C.; Mathemeier, L.; Dietrich, A.; Ruzicka, T.; von Braunmühl, T. Identification of ex-vivo confocal scanning microscopic features and their histological correlates in human skin. J. Biophotonics 2016, 9, 376–387.
51. Hanna, M.G.; Reuter, V.E.; Samboy, J.; England, C.; Corsale, L.; Fine, S.W.; Agaram, N.P.; Stamelos, E.; Yagi, Y.; Hameed, M.; et al. Implementation of Digital Pathology Offers Clinical and Operational Increase in Efficiency and Cost Savings. Arch. Pathol. Lab. Med. 2019, 143, 1545–1555.
52. Hartmann, D.; Krammer, S.; Vural, S.; Bachmann, M.R.; Ruini, C.; Sardy, M.; Ruzicka, T.; Berking, C.; von Braunmuhl, T. Immunofluorescence and confocal microscopy for ex-vivo diagnosis of melanocytic and non-melanocytic skin tumors: A pilot study. J. Biophotonics 2018, 11, e201700211.
53. Hartmann, D.; Ruini, C.; Mathemeier, L.; Bachmann, M.R.; Dietrich, A.; Ruzicka, T.; von Braunmuhl, T. Identification of ex-vivo confocal laser scanning microscopic features of melanocytic lesions and their histological correlates. J. Biophotonics 2017, 10, 128–142.
54. Puliatti, S.; Bertoni, L.; Pirola, G.M.; Azzoni, P.; Bevilacqua, L.; Eissa, A.; Elsherbiny, A.; Sighinolfi, M.C.; Chester, J.; Kaleci, S.; et al. Ex vivo fluorescence confocal microscopy: The first application for real-time pathological examination of prostatic tissue. BJU Int. 2019, 124, 469–476.
55. Rezende, R.M.; Lopes, M.E.; Menezes, G.B.; Weiner, H.L. Visualizing Lymph Node Structure and Cellular Localization using Ex-Vivo Confocal Microscopy. J. Vis. Exp. 2019, 150, e59335.
56. Krishnamurthy, S.; Cortes, A.; Lopez, M.; Wallace, M.; Sabir, S.; Shaw, K.; Mills, G. Ex Vivo Confocal Fluorescence Microscopy for Rapid Evaluation of Tissues in Surgical Pathology Practice. Arch. Pathol. Lab. Med. 2018, 142, 396–401.
57. Bagci, I.S.; Aoki, R.; Krammer, S.; Ruzicka, T.; Sardy, M.; French, L.E.; Hartmann, D. Ex vivo confocal laser scanning microscopy for bullous pemphigoid diagnostics: New era in direct immunofluorescence? J. Eur. Acad. Dermatol. Venereol. 2019, 33, 2123–2130.
58. Bagci, I.S.; Aoki, R.; Krammer, S.; Ruzicka, T.; Sardy, M.; Hartmann, D. Ex vivo confocal laser scanning microscopy: An innovative method for direct immunofluorescence of cutaneous vasculitis. J. Biophotonics 2019, 12, e201800425.
59. Bertoni, L.; Azzoni, P.; Reggiani, C.; Pisciotta, A.; Carnevale, G.; Chester, J.; Kaleci, S.; Reggiani Bonetti, L.; Cesinaro, A.M.; Longo, C.; et al. Ex vivo fluorescence confocal microscopy for intraoperative, real-time diagnosis of cutaneous inflammatory diseases: A preliminary study. Exp. Dermatol. 2018, 27, 1152–1159.
60. Bağcı, I.S.; Aoki, R.; Vladimirova, G.; Sárdy, M.; Ruzicka, T.; French, L.E.; Hartmann, D. Simultaneous immunofluorescence and histology in pemphigus vulgaris using ex vivo confocal laser scanning microscopy. J. Biophotonics 2021, 13, e202000328.
61. Bağcı, I.S.; Aoki, R.; Vladimirova, G.; Ergün, E.; Ruzicka, T.; Sárdy, M.; French, L.E.; Hartmann, D. New-generation diagnostics in inflammatory skin diseases: Immunofluorescence and histopathological assessment using ex vivo confocal laser scanning microscopy in cutaneous lupus erythematosus. Exp. Dermatol. 2020, 30, 684–690.
62. van Zon, M.C.M.; van der Waa, J.D.; Veta, M.; Krekels, G.A.M. Whole-slide margin control through deep learning in Mohs micrographic surgery for basal cell carcinoma. Exp. Dermatol. 2021, 30, 733–738.
63. Sohn, G.K.; Sohn, J.H.; Yeh, J.; Chen, Y.; Brian Jiang, S.I. A deep learning algorithm to detect the presence of basal cell carcinoma on Mohs micrographic surgery frozen sections. J. Am. Acad. Dermatol. 2021, 84, 1437–1438.
64. Hou, L.; Nguyen, V.; Kanevsky, A.B.; Samaras, D.; Kurc, T.M.; Zhao, T.; Gupta, R.R.; Gao, Y.; Chen, W.; Foran, D.; et al. Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images. Pattern Recognit. 2019, 86, 188–200.
65. Veta, M.; van Diest, P.J.; Willems, S.M.; Wang, H.; Madabhushi, A.; Cruz-Roa, A.; Gonzalez, F.; Larsen, A.B.; Vestergaard, J.S.; Dahl, A.B.; et al. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Med. Image Anal. 2015, 20, 237–248.
66. Clymer, D.; Kostadinov, S.; Catov, J.; Skvarca, L.; Pantanowitz, L.; Cagan, J.; LeDuc, P. Decidual Vasculopathy Identification in Whole Slide Images Using Multiresolution Hierarchical Convolutional Neural Networks. Am. J. Pathol. 2020, 190, 2111–2122.
67. Dance, A. AI spots cell structures that humans can’t. Nature 2021, 592, 154–155.
68. Savage, N. How AI is improving cancer diagnostics. Nature 2020, 579, S14–S16.
69. Sotoudeh, H.; Shafaat, O.; Bernstock, J.D.; Brooks, M.D.; Elsayed, G.A.; Chen, J.A.; Szerip, P.; Chagoya, G.; Gessler, F.; Sotoudeh, E.; et al. Artificial Intelligence in the Management of Glioma: Era of Personalized Medicine. Front. Oncol. 2019, 9, 768.
70. Forsch, S.; Klauschen, F.; Hufnagl, P.; Roth, W. Artificial Intelligence in Pathology. Dtsch. Arztebl. Int. 2021, 118, 194–204.
71. Combalia, M.; Garcia, S.; Malvehy, J.; Puig, S.; Mulberger, A.G.; Browning, J.; Garcet, S.; Krueger, J.G.; Lish, S.R.; Lax, R.; et al. Deep learning automated pathology in ex vivo microscopy. Biomed. Opt. Express 2021, 12, 3103–3116.
72. Grupp, M.; Illes, M.; Mentzel, J.; Simon, J.C.; Paasch, U.; Grunewald, S. Routine application of ex vivo confocal laser scanning microscopy with digital staining for examination of surgical margins in basal cell carcinomas. J. Dtsch. Dermatol. Ges. 2021, 19, 685–692.
73. Sarode, M.; Deshmukh, P. Reduction of Speckle Noise and Image Enhancement of Images Using Filtering Technique. Int. J. Adv. Technol. 2011, 2011, 30–38.
Figure 1. Image processing, beginning from the digital staining mode of the ex vivo confocal image (A) as the ground truth; the segmentation performed by an expert, highlighted in green (B); the heatmap overlaid on the tissue, using the jet colormap ranging from blue (tumor-free tissue) to red (tumor tissue) (C); the tissue marked as ROI by the algorithm (D); the heatmap (E); and the tissue separation mask (F).
Figure 2. Precision-recall (A) and ROC curves (B).
Figure 3. One of the pitfalls of the algorithm; tendency to misclassify sebaceous glands as tumor tissue. (A) Ground truth (tumor-free tissue with sebaceous glands). (B) Segmented areas by the algorithm considered as tumor.
Table 1. Demographic data of the cSCC patients included in the study (17 patients with 22 tumors), y = years.
Epidemiological data
Age, average (y): 78.6
Gender, female (%): 29.4
Gender, male (%): 70.6
Table 2. Number of mosaics considered ‘tumor-free’ and ‘carcinogenic’.
Cutaneous squamous cell carcinoma (n = 22): 39,423 tumor-free mosaics, 6409 tumor mosaics
Tumor-free skin (n = 12): 22,971 tumor-free mosaics, 0 tumor mosaics
Total: 62,394 tumor-free mosaics, 6409 tumor mosaics
Table 3. Sensitivity, specificity, area under the precision-recall curve, and area under the ROC curve.
Sensitivity: 0.76
Specificity: 0.91
Area under ROC curve: 0.90
Area under precision-recall curve: 0.85
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
