Perspective

Towards Artificial Intelligence Applications in Next Generation Cytopathology

by Enrico Giarnieri 1,* and Simone Scardapane 2

1 Cytopathology Unit, Department of Clinical and Molecular Medicine, Sant’Andrea Hospital, Sapienza University of Rome, Piazzale Aldo Moro 5, 00189 Rome, Italy
2 Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00196 Rome, Italy
* Author to whom correspondence should be addressed.
Biomedicines 2023, 11(8), 2225; https://doi.org/10.3390/biomedicines11082225
Submission received: 22 June 2023 / Revised: 4 August 2023 / Accepted: 5 August 2023 / Published: 8 August 2023
(This article belongs to the Special Issue Next Generation Cytopathology: Current Status and Future Prospects)

Abstract

Over the last 20 years we have seen an increase in techniques in the field of computational pathology and machine learning, improving our ability to analyze and interpret imaging. Neural networks, in particular, have been used for more than thirty years, starting with computer-assisted smear tests based on early-generation models. Today, advanced machine learning, working on large image data sets, has been shown to perform classification, detection, and segmentation with remarkable accuracy and generalization in several domains. Deep learning algorithms, as a branch of machine learning, are thus attracting attention in digital pathology and cytopathology, providing feasible solutions for accurate and efficient cytological diagnoses, ranging from efficient cell counts to automatic classification of anomalous cells and queries over large clinical databases. The integration of machine learning with related next-generation technologies powered by AI, such as augmented/virtual reality, the metaverse, and computational linguistic models, is a focus of interest in health care digitalization, supporting education, diagnosis, and therapy. In this work we consider how all these innovations can help cytopathology go beyond the microscope and undergo a hyper-digitalized transformation. We also discuss specific challenges to their application in the field, notably, the requirement for large-scale cytopathology datasets, the necessity of new protocols for sharing information, and the need for further technological training for pathologists.

1. Introduction

Cytopathology is a branch of laboratory medicine that studies details of cellular morphology useful for cancer screening and early diagnosis. Compared to histopathology, cytology focuses on specific pathological features of single cells in a context of thousands of cells in a specific tissue architecture. Modifications of cell properties and morphology reflect the biological status of a specific organ [1,2,3]. Cellular material is obtained through exfoliative cytology, body fluids, scraping, and aspiration cytology, and its morphological aspects are used to formulate a diagnosis using internationally recognized guidelines [4,5,6,7,8,9]. In diagnostic cytopathology, cytologists are expected to scrutinize every cell under the microscope or in gigapixel whole slide images to search for alterations, which are sometimes represented only in a few groups of cells. This can be a challenge for cytologists, involving highly time-consuming and tedious work [10]. Technology involving artificial intelligence (AI) has shown remarkable progress in medicine, including image interpretation and computer-assisted diagnosis in both histopathology and cytopathology. Although this process is still at an early stage, it will probably represent the third revolution in pathology, through the introduction of AI into medical routines in which pathologists will be central to the development of algorithms and their validation [11,12]. Deep learning, as a branch of machine learning and the major tool in the current AI wave, has greatly accelerated the development of computational cytopathology, exploiting algorithms and specific architecture designs such as multilayer perceptrons (MLP), convolutional neural networks (CNN), recurrent neural networks (RNN), and transformers [13]. In addition, in recent decades spatial computing and the metaverse have also been empowered by AI.
Their synergy can create a new scenario for digital pathology in both teaching and diagnosis through platforms, devices, chatbots, and other human–machine interaction tools. In this perspective paper, we will review these advances in machine learning techniques and evaluate practical aspects for their application to digital cytopathology, including future developments and open challenges.
The paper, while based upon the experience and knowledge of the authors, provides an entry point both for pathologists interested in how AI technologies will impact the field, and for AI practitioners who want to gain a perspective on the specific challenges and opportunities of this new wave of applications in the medical domain. The paper is organized as follows. In the rest of this section, we provide a brief historical perspective on the use of automation techniques in cytopathology, starting with early diagnostic systems in the 1960s up to today. In Section 2 we describe deep learning models for computer vision and classical applications in the medical field, including object detection (e.g., cell counting) and segmentation, along with some specific challenges, such as the need to improve data acquisition and quality. We then overview the use of AI-powered technologies, including virtual reality for training and visualization (Section 3), natural language processing (Section 4), and decentralized technologies (Section 5). We conclude in Section 6 with some additional comments and a summary.

From Cytology Automation to Artificial Intelligence

In cytopathology, screening for the early detection of cervical cancer was one of the largest early applications of image analysis, through the construction of platforms using microscope units, software tools for display, and tele-control. These early systems showed limitations but also advantages. In 1952, at the University of Tennessee, Mellors et al. designed the Cytoanalyzer, the first semi-automated screener based on an optical electronic machine to speed up detection of cancer cells of the uterine cervix. The Cytoanalyzer was intended to compensate for the scarcity of technicians available to analyze cells and to improve the early diagnosis of uterine cancer [14]. In the mid-1960s, the Taxonomic Intra-Cellular Analytic System (TICAS) demonstrated utility in the field of automated diagnostic systems [15]. In the 1970s, Zahniser et al. developed the BIOPER system, a sophisticated software system designed to obtain a high throughput of smears per hour with a low percentage of false positives and false negatives, and they introduced a fixed cutoff of 2% “abnormal” cells on a slide to trigger an alarm [16]. By 1989, improved hardware and software allowed systems like LEYTAS, Cytopress, Cervifip, and Cyto-Savant to reduce the workload, screening time, and errors through more interactive diagnostic procedures [17,18,19]. A new approach to cell classification began in the 1980s with neural network technology and the popularization of the backpropagation training algorithm, applied in many areas, including cytology automation. The first commercial approach using artificial intelligence (AI) in cytopathology was PAPNET™, a semiautomated system based on neural network modeling. This system was introduced for quality control in smear rescreening, leaving the decision directly to the machine, through internal algorithms. The system was aimed at reducing the number of false negatives and was an additional tool for the interpretation of abnormal cells [20,21].
The interest in neural networks was renewed after 2012, with deep neural networks (DNNs) becoming the state-of-the-art solution for multiple benchmarks in the computer vision and natural language processing fields [22,23] and, more recently, with the emergence of large language models (LLMs) such as ChatGPT. DNNs demonstrated an ability to work directly on raw images [24], and they can be trained to classify, segment, and process images with extremely high accuracy in a variety of fields. Consequently, investigations have started into the use of DNNs in several medical applications, including diagnostic cytology [25].

2. Applications of Computer Vision Models to Cytopathology

As stated before, advances in machine learning have recently impacted cytopathology, providing opportunities for all pathologists in their daily work. Two important computer vision tasks in this context are detection and segmentation. Detection is the task of finding specific objects in an image, such as neoplastic cells in a context of normal cells. Machine learning can be used to classify the grade of atypia of each single cell, highlighting it with the proper bounding box, which requires specialized object detection networks (Figure 1). Segmentation involves the categorization of each pixel in the image with a specific class, allowing a fine-grained separation of the cells from their background. While patch-based convolutional neural networks can identify and locate objects of different types, segmentation detects not only objects but also their boundaries without suffering from different staining conditions or hand-crafted features, making it an important tool in whole slide imaging [26,27,28,29,30,31,32]. In their daily routine, cytopathologists analyze and integrate a large amount of morphological information. Hundreds of thousands of different cell features are simultaneously examined by a human mind skilled at quick interpretation. Furthermore, modern cytology increasingly integrates clinical information, immunocytochemical staining, and molecular pathological data, especially in diagnostically difficult cases and when clinicians require prognostic factors.
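To make the two tasks concrete, the following minimal Python sketch (illustrative, not drawn from any system cited above; all values are toy examples) computes the two standard metrics used to evaluate them: intersection-over-union (IoU) for detection bounding boxes, and the Dice coefficient for segmentation masks.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks (flat sequences of 0/1 pixels)."""
    inter = sum(x * y for x, y in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# A predicted cell bounding box vs. a pathologist's annotation (toy values).
print(round(box_iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
# A predicted segmentation mask vs. the annotated ground truth (toy values).
print(round(dice([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # 0.667
```

In evaluation, a detection typically counts as correct when its IoU with an annotated box exceeds a fixed threshold (0.5 is a common choice), which is how the per-cell classifications described above are scored against expert annotations.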
From this point of view, automatic methods for cell counting, boundary identification, and cell classification in digital pathology are viewed with great expectation, although challenges remain. Cytopathologists must learn how to use algorithms correctly (including their drawbacks, described below), how they work and, above all, their clinical utility [33]. The combination of image analysis and machine learning (ML) could be the key to improving quality assurance, reducing factors that can cause diagnostic errors. This approach would require laboratories to be equipped with specific technologies and skilled staff.

2.1. Data Acquisition and Availability

The creation of digital slide libraries, now available on different public or private platforms, is rapidly transforming digital pathology [34]. In the future, each laboratory will develop its own dataset of images, classified by type of disease, to be shared with other laboratories for educational and diagnostic purposes and to develop algorithms. To this end, operators must learn how to use novel annotation software for images (e.g., the VGG Image Annotator developed by the Visual Geometry Group) [35], and to use specialized deep neural network (DNN) models such as convolutional neural networks (CNNs). Several convolutional neural network architectures are available to process images, including medical images. The EfficientNet, MobileNet, XceptionNet, and InceptionNetv3 architectures have demonstrated good accuracy, model efficiency, and low computational costs [36,37,38,39]. In cytopathology, there is an elevated level of complexity due to sample preparation types, the presence of hypercellularity with a multitude of cytologic substrates, and the similarity of morphological features. These aspects require complex mental reasoning based on a pathologist’s experience with a large set of images. This means that the training phase must be carried out using thousands of high-quality images, each annotated by an expert pathologist, which would involve a considerable amount of time and work, to ensure that a trained network can effectively generalize across different scenarios, equipment, and laboratories. In the future, this could be solved through decentralizing image banks in various institutions and making them available through a blockchain-based network or federated learning (Section 5), or by using self-supervised algorithms.
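As a hypothetical illustration of the annotation workflow described above, the sketch below parses rectangular cell annotations from a VGG Image Annotator (VIA)-style JSON export into bounding boxes usable for training a detector. The field names follow the VIA 2.x rectangle format (`shape_attributes` with `x`, `y`, `width`, `height`) as we understand it; the file key, filename, and label are invented, and the exact schema should be verified against the export produced by the VIA version in use.

```python
import json

# A toy VIA-style export with one rectangle annotation (hypothetical values).
via_export = json.loads("""
{
  "slide_001.png-1": {
    "filename": "slide_001.png",
    "regions": [
      {"shape_attributes": {"name": "rect", "x": 40, "y": 60,
                            "width": 32, "height": 28},
       "region_attributes": {"label": "atypical"}}
    ]
  }
}
""")

def rect_boxes(export):
    """Yield (filename, label, (x1, y1, x2, y2)) for each rectangle region."""
    for entry in export.values():
        for region in entry.get("regions", []):
            sa = region["shape_attributes"]
            if sa.get("name") != "rect":
                continue  # skip polygons, ellipses, and other region shapes
            box = (sa["x"], sa["y"],
                   sa["x"] + sa["width"], sa["y"] + sa["height"])
            yield entry["filename"], region["region_attributes"].get("label"), box

print(list(rect_boxes(via_export)))
# [('slide_001.png', 'atypical', (40, 60, 72, 88))]
```

A conversion step like this is typically the bridge between a pathologist's annotation tool and the tensor format expected by an object detection framework.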

2.2. Current Challenges and Limitations

Despite their promising performance, DNN applications have limitations that must be acknowledged by pathologists. Firstly, they still require a large amount of expertly labelled data to be trained, especially in medicine and pathology. Fields in which such data are publicly available benefit more than fields, such as cytology, where data collection is still ongoing. In computer vision, this problem has been tackled by the emerging field of self-supervised learning, which allows the pre-training of neural network models (sometimes known as “foundation models”) using large sets of unlabeled images before tackling a downstream task, such as segmentation, where few labeled points are known. While some initial progress has been made in the development of foundation models for medical imaging, this is still an open challenge for cytopathology [40,41]. Secondly, DNNs are “black box” classifiers, meaning that it is generally difficult to understand why a certain image has been classified in a certain way [42]. For this reason, more recent works have sought to integrate the predictions of DNNs with techniques capable of improving interpretability and understanding by physicians who are not experts in algorithms and artificial intelligence [43,44]. However, we underline that most applications of explainability techniques today require users to be proficient in the AI models themselves. Developing explainability tools for clinicians or doctors with limited knowledge of neural networks, evaluating them in a real-world setting, and integrating them in production environments are still open challenges [45]. Third, when looking at the confidence scores in output, and not just the most probable class, most neural networks tend to be overly confident and uncalibrated, i.e., the predicted probabilities tend to underestimate the true probability of error.
This is a major problem when the confidence in the output must be used in a clinical process to carefully evaluate cost-benefit trade-offs [39]. An uncalibrated model can indeed provide unrealistic confidence in certain predictions, which in turn can be problematic in a medical setting where important diagnostic decisions must be made [46].
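The miscalibration just described can be quantified. The sketch below (illustrative, with toy numbers) computes the expected calibration error (ECE), a standard summary statistic: predictions are binned by confidence, and within each bin the average confidence is compared to the actual accuracy.

```python
def expected_calibration_error(preds, n_bins=5):
    """ECE over (confidence, correct?) pairs, with equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in preds:
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, correct))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        # Each bin contributes its |confidence - accuracy| gap, weighted by size.
        ece += len(b) / len(preds) * abs(avg_conf - accuracy)
    return ece

# An over-confident toy model: 90% confident, but right only half the time.
preds = [(0.9, True), (0.9, False), (0.9, True), (0.9, False)]
print(expected_calibration_error(preds))
```

A well-calibrated model would yield an ECE near zero; the toy model above has a gap of 0.4 between its stated confidence and its actual accuracy, exactly the kind of unrealistic confidence that is dangerous in a diagnostic setting.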

2.3. Improving Data Acquisition and Quality

To date, digital pathology images have been obtained with devices such as microscopic cameras or slide scanners. These devices cannot produce completely identical digital images, even when the image is taken using the same microscope and camera sequentially. Ogura et al. reported discordant classification results between paired digital histopathology images obtained from two independent scans using the same microscope [47]. Compared to thin prep slide preparations, conventional cytology is generally much thicker, resulting in patches of cells being defocused when examined under the microscope, which requires pathologists to change focal plane continually. Recently, deep learning methods using a dual-view system have been reported to improve the accuracy of cellular quantification, image sharpness, and the number of recovered image details compared to single-view imaging. Furthermore, defocusing problems can be addressed using domain normalization net (DNN) and refocusing net (RFN) methods to improve data set performance on cervical cytopathology images [48,49]. Overall, we expect neural networks will continue to have a significant impact in improving the data acquisition process in cytopathology laboratories, similar to the role they play in faster MRI acquisition or X-ray diagnostics.

3. Use Cases for Augmented and Virtual Reality in Cytopathology

AI-powered emerging technologies such as augmented reality (AR)/virtual reality (VR) and the metaverse can potentially create a realistic virtual world to support learning and diagnosis in digital pathology and cytopathology. With high-bandwidth 5G, and in the future 6G, ML and neural network models will become ever more widespread for different tasks and in different contexts. Through human–machine interaction tools with immersive technologies like head-mounted displays supported by AI, it will be possible for the pathologist to view whole slides in a metaverse environment and easily interact with one or more remote colleagues. However, in the virtual world there are still some technical challenges to solve, such as image quality reduction, noise, haze, blurring, and low resolution, all of which can influence visual perception. Some preliminary CNN architectures have been proposed to mitigate these issues [50,51,52]. Compared to AR and VR, mixed reality (MR) has demonstrated potential utility in the metaverse due to its hybrid physical–virtual experiences, delivered via two main types of devices: holographic and immersive. In the first case, holographic devices allow users to interact with virtual objects overlaid on the physical world, while immersive devices place the user within a fully virtual environment. Mixed reality technologies demonstrate many healthcare benefits when integrated with tools for preparing surgical sites [53] or for viewing whole slide images in a virtual environment [54]. To move cytopathology into a virtual scenario, specifically from an educational point of view, the technical challenges and human adaptability should be taken into consideration (Figure 2).
Currently, VR technology available for digital slide navigation does not acquire images in 3D, and imaging tools do not fulfill all the requirements for fast and high-resolution acquisition. VR can be improved by the creation of a virtual projection of 2D images in a simulated 360° environment. For example, GANverse3D, introduced by NVIDIA, transforms 2D images into 3D animated objects that can be viewed and controlled in virtual environments within Nvidia Omniverse [55]. However, viewing mixed reality content such as 3D holograms will require 5G-class bandwidth and minimal latency, together with pipelines that integrate mixed reality into medical devices and convert image records into holograms compatible with those devices. Finally, participants without sufficient experience and time spent in the VR environment show well-documented side effects such as nausea, eyestrain, and seizures; therefore, long-term usage of VR in clinical practice deserves further investigation [56]. Summarizing these points, the next generation of devices must be improved to provide visual–interactive experiences with reduced side effects, costs, and workflow interruption, while maintaining standardization in the imaging process; these are the most important aspects to address to reduce professional reluctance to adopt new technologies. We also note that medical 3D consultation and teaching are already in the experimentation and use phases, a field where VR and AR technologies can have a significant impact. Future tasks for educational use of VR environments in medical training will be characterized by important challenges for medical educators and students. Instructors who want to apply VR environments in medical education need to properly understand each type of technology available, to facilitate student adaptation, avoid negative effects during learning activities, and ensure the long-term practicality of human–computer interaction in medical routines in the future.

4. Natural Language Processing in Cytopathology

Natural language processing (NLP) concerns the application of statistical, computational, and AI models to process and analyze large amounts of text [57]. Large language models (LLMs) such as the Generative Pretrained Transformer (GPT) have emerged as the main tool in the use of neural networks for NLP. LLMs are trained using a huge amount of textual data, mostly gathered from the internet, using a combination of techniques such as next-token prediction and instruction tuning. They show a surprising level of reasoning and problem-solving capabilities, and they can be used for many different tasks such as language translation, text summarization, and dialogue systems. Importantly, they can answer questions and interact in a conversational fashion, making them accessible also to non-expert users. Although ChatGPT and similar open-source models such as LIMA are currently subject to debates on plagiarism and cheating, in some sectors, such as healthcare, they could make an important contribution [58,59,60]. For example, for teaching assistance, LLMs might be used to generate exercises, quizzes, and scenarios in the classroom or at home, and to act as a virtual tutor that can answer students’ questions and provide feedback on their progress. In healthcare, there is a list of potentially ideal LLM tools: virtual assistants for telemedicine and remote patient monitoring, medical education for students and healthcare professionals, research, and clinical trials [61]. In cytopathology, LLMs could be used in teaching and routine diagnostics. In the first case, an LLM could help in the initial theoretical stage of study, helping students to discover bibliographic material and guidelines and to explore basic concepts of cytopathology in different organs.
In diagnostics, LLMs could support a discussion about morphological aspects in a specific clinical case, within a forum between professionals and a virtual cytopathologist, in order to choose specific molecular markers to complete a diagnosis. They could facilitate the navigation of vast amounts of medical and pathological information on the internet, compare specific images in large data sets, discover literature reviews summarizing relevant articles, and take part critically in the debate about a possible diagnosis (Figure 3). Recently, large research efforts have been oriented to deep learning-based image captioning through architectures capable of processing images and generating language [62]. In cytopathology, it may be of interest to develop models that pair image information with text captions. This may help a model to learn the morphological features of an image from expert-validated text descriptions and, therefore, gain an automated capacity to output textual interpretations that are putatively consistent with its training data.

Limits of LLM Models

Unfortunately, the simplicity of interacting with a GPT-like model through a text interface hides the complexity of using it in an efficient way to obtain useful and actionable results. In particular, “prompt engineering” is emerging as a new research direction to find the best ways to elicit good responses from LLMs. For example, including “you are an expert in linear algebra with multiple years of experience” when querying a generalist model like ChatGPT on linear algebra topics can improve the quality of the answers. This means that users will need to be proficient at several emerging techniques which are still evolving in the literature, such as few-shot prompting or user-based fine-tuning, to align the model to their preferences. This creates challenges to their use and may result in models that are sub-optimal for a given task when using out-of-the-box commercial models such as ChatGPT. In addition, using LLM technologies will require users to focus more on data curation and security, to avoid models that can be hijacked to elicit sensitive information memorized from their training set, or that replicate (or generate) fake or unclear information [63,64]. Recently, Peng et al. developed a generative LLM, namely GatorTronGPT, for the medical domain to evaluate its utility for research and healthcare. The study trained a GPT-3 architecture on 277 billion words of mixed clinical and general English text, demonstrating the utility of synthetic clinical text generation for clinical research, with linguistic readability comparable to real-world clinical notes [65].
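The two techniques mentioned above, role priming and few-shot prompting, amount to structured string assembly before the query ever reaches the model. The sketch below is a hypothetical illustration: the role text, the example observation/assessment pair, and the query are all invented, and the resulting string would be sent to whichever LLM API a laboratory has access to.

```python
# Hypothetical role preamble, in the spirit of the "you are an expert..." example.
ROLE = "You are an expert cytopathologist with many years of experience."

# Invented few-shot examples: (observation, assessment) pairs that show the
# model the desired answer format before it sees the real query.
FEW_SHOT = [
    ("Cells show a high N/C ratio and irregular nuclear membranes.",
     "Suspicious for malignancy; recommend correlation with clinical findings."),
]

def build_prompt(query, role=ROLE, examples=FEW_SHOT):
    """Assemble a role-primed, few-shot prompt ending at the open slot."""
    parts = [role, ""]
    for observation, assessment in examples:
        parts += [f"Observation: {observation}",
                  f"Assessment: {assessment}", ""]
    # The trailing "Assessment:" invites the model to complete the pattern.
    parts += [f"Observation: {query}", "Assessment:"]
    return "\n".join(parts)

prompt = build_prompt("Uniform small cells with smooth nuclear contours.")
print(prompt)
```

The point of the sketch is that prompt quality is a user-controlled variable: the same underlying model can give noticeably different answers depending on how this string is constructed.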

5. Decentralized Technologies in Cytopathology

Blockchain is an example of a decentralized data storage facility, that acts as a digital ledger for storage of a list of assets using cryptography technology without a centralized entity [66]. In the last decade, advanced methods that combine decentralized data storage techniques and AI models have been proposed. Cooperation between deep learning and blockchain has proven useful by removing the need to centralize data storage or to control data flow and modifications, and, for this reason, they have several applications, especially in healthcare [67,68]. For example, blockchain allows patients to assign rules for access to their medical data, permitting access to parts of their data for diagnostic consultation or research for a fixed time period. In digital pathology, high-resolution images enable physicians to collect, tag, expand, share, and analyze specific sections of the image slides, reducing the time of diagnosis [69,70]. In recent years, we have observed how AI has become crucial especially in precision medicine, where a DL model has shown potential in the prognosis and diagnosis of cancer using a vast amount of information including gene mutation status, molecular subtypes, microsatellite instability (MSI) related to histopathology in different cancer types, thus, showing important clinical applicability [71,72]. These models are characterized by large and diversified data that are trained by a single server; however, in cases of datasets located in different institutions and countries, where regulations on patient information differ, data sharing may become complicated. Decentralized model solutions to circumvent this issue are federated learning (FL) and swarm learning (SL). In FL, data resides at the original location and only model parameters are shared among participants and, possibly, a centralized orchestrator, during training. 
Having a federated learning model is the key to exploiting unlabeled data, which will allow multiple, geographically separated institutions to share their data with controls to protect patient privacy, while permitting access to self-supervised algorithms. SL represents a decentralized learning system that combines edge computing with blockchain-based peer-to-peer networking and coordination, preserving confidentiality, privacy, and security without a central coordinator [73,74]. For example, these methods could be applied in a cytopathology laboratory to obtain a faster diagnosis using ML when there are not enough labelled images to train the model. To solve the problem, the cytopathologist could obtain sets of images of a similar case from another laboratory. This may not be possible due to data confidentiality; however, regulations on cytopathological imaging have not been introduced yet. Both FL and SL could resolve the privacy issue.
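The FL aggregation step described above can be sketched in a few lines. The following toy example (illustrative only, not a production system) shows one round of federated averaging (FedAvg): each laboratory shares only its locally trained parameter vector and its local sample count, and the orchestrator combines them as a weighted average, so raw patient images never leave any institution.

```python
def federated_average(updates):
    """One FedAvg round. updates: list of (n_local_samples, parameter_vector)."""
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    # Weight each laboratory's parameters by its local dataset size.
    return [sum(n * params[i] for n, params in updates) / total
            for i in range(dim)]

# Toy parameter vectors from three hypothetical laboratories.
updates = [
    (100, [0.2, 0.4]),   # small laboratory
    (300, [0.6, 0.0]),   # large laboratory, weighted 3x more heavily
    (100, [0.2, 0.4]),
]
print([round(w, 6) for w in federated_average(updates)])  # [0.44, 0.16]
```

In a real deployment the parameter vectors would be full neural network weights and the exchange would run over an authenticated channel; SL replaces the central averaging step with blockchain-coordinated peer-to-peer aggregation.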

6. Conclusions

In view of its importance in making a correct diagnosis and, thus, in selecting an appropriate course of treatment for the patient, it has become increasingly important for the clinical practice of cytopathology to integrate the latest technological developments in AI, immersive technologies, and decentralized algorithms. When making a diagnosis using a microscope, cytopathologists need to be aware of the possible integration of helpful AI diagnostic models, 3D modeling tools to interact with scans in a more immersive fashion, and dialog models to retrieve and query information interactively. In the near future, the challenges will mainly concern the appropriate training of domain experts. There will be a need for adequate training in technological methods that go beyond the microscope, using digital technology, the virtual environment, and AI, with all its branches and potential, and a complete understanding of the advantages and drawbacks of these technologies. The next generations of cytopathologists will certainly be more digitally adept and ready to adapt to technological change, which will facilitate their training and ability to perform tasks in the diagnostic phase.

Author Contributions

E.G. and S.S. contributed to writing, reviewing, and editing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Morrison, W.B.; DeNicola, D.B. Advantages and disadvantages of cytology and histopathology for the diagnosis of cancer. Semin. Vet. Med. Surg. (Small Anim.) 1993, 8, 222–227. [Google Scholar] [PubMed]
  2. Dey, P. Basic and Advanced Laboratory Techniques in Histopathology and Cytology, 1st ed.; Springer: Singapore, 2018; pp. 139–146. [Google Scholar]
  3. Gasparini, S. Histology versus cytology in the diagnosis of lung cancer: Is it a real advantage? J. Bronchol. Interv. Pulmonol. 2010, 17, 103–105. [Google Scholar] [CrossRef]
  4. Faquin, W.C.; Rossi, E.D.; Baloch, Z. The Milan System for Reporting Salivary Gland Cytopathology, 1st ed.; Springer: Basel, Switzerland, 2018. [Google Scholar]
  5. Rosenthal, D.L.; Wojcik, E.M.; Kurtycz, D.F.I. The Paris System for Reporting Urinary Cytology, 1st ed.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  6. Field, A.S.; Raymond, W.A.; Rickard, M.; Arnold, L.; Brachtel, E.F.; Chaiwun, B.; Chen, L.; Di Bonito, L.; Kurtycz, D.F.I.; Lee, A.H.S.; et al. The International Academy of Cytology Yokohama System for Reporting Breast Fine Needle Aspiration Biopsy Cytopathology. Acta Cytol. 2019, 63, 257–273. [Google Scholar] [CrossRef] [PubMed]
  7. Field, A.S.; Raymond, W.A.; Schmitt, F. The International Academy of Cytology Yokohama System for Reporting Breast Fine Needle Aspiration Biopsy Cytopathology, 1st ed.; Springer Nature: Cham, Switzerland, 2020. [Google Scholar]
  8. Pitman, M.B.; Layfield, L. The Papanicolaou Society of Cytopathology System for Reporting Pancreaticobiliary Cytology: Definitions Criteria and Explanatory Notes, 1st ed.; Springer Nature: Cham, Switzerland, 2015. [Google Scholar]
  9. Nayar, R.; Wilbur, D.C. The Bethesda System for Reporting Cervical Cytology: Definitions, Criteria and Explanatory Notes, 3rd ed.; Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar]
  10. De Vito, C.; Angeloni, C.; De Feo, E.; Marzuillo, C.; Lattanzi, A.; Ricciardi, W.; Villari, P.; Boccia, S. A large cross-sectional survey investigating the knowledge of cervical cancer risk etiology and the predictors of the adherence to cervical cancer screening related to mass media campaign. BioMed Res. Int. 2014, 2014, 304602. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Salto-Tellez, M.; Maxwell, P.; Hamilton, P. Artificial intelligence- the third revolution in pathology. Histopathology 2019, 74, 372–376. [Google Scholar] [CrossRef] [Green Version]
  12. Abels, E.; Pantanowitz, L.; Aeffner, F.; Zarella, M.D.; van der Laak, J.; Bui, M.M.; Vemuri, V.N.; Parwani, A.V.; Gibbs, J.; Agosto-Arroyo, E.; et al. Computational pathology definitions, best practices, and recommendations for regulatory guidance: A white paper from the Digital Pathology Association. J. Pathol. 2019, 249, 286–294. [Google Scholar] [CrossRef] [Green Version]
  13. Jiang, H.; Zhou, Y.; Lin, Y.; Chan, R.C.K.; Liu, J.; Chen, H. Deep learning for computational cytology: A survey. Med. Image Anal. 2023, 84, 102691. [Google Scholar] [CrossRef]
  14. Mellors, R.C.; Glassman, A.; Papanicolaou, G.N. A microfluorometric scanning method for the detection of cancer cells in smears of exfoliated cells. Cancer 1952, 5, 458–468. [Google Scholar] [CrossRef]
  15. Wied, G.L.; Bartels, P.H.; Bahr, G.F.; Oldfield, D.G. Taxonomic intra-cellular analytic system TICAS for cell identification. Acta Cytol. 1968, 12, 180. [Google Scholar]
  16. Zahniser, D.J.; Oud, P.S.; Raaijmakers, M.C.T.; Vooijs, G.P.; van de Walle, R.T. BioPEPR: A system for the automatic prescreening of cervical smears. J. Histochem. Cytochem. 1979, 27, 635. [Google Scholar] [CrossRef] [Green Version]
  17. Ploem, J.S.; van Driel-Kuller, A.M.; Ploem-Zaaijer, J.J. Automated cell analysis for DNA studies of large cell populations using the LEYTAS image cytometry system. Pathol.-Res. Pract. 1989, 185, 671–675. [Google Scholar] [CrossRef]
  18. Carothers, A.; McGoogan, E.; Vooijs, P.; Bird, C.; Colquhoun, M.; Eason, P.; McKie, M.; Nieuwenhuis, F.; Pitt, P.; Rutovitz, D. A collaborative trial of a semi-automatic system for slide preparation and screening in cervical cytopathology. Anal. Cell. Pathol. 1994, 7, 261–274. [Google Scholar] [PubMed]
  19. Garner, D.; Harrison, A.; MacAulay, C.; Palcic, B. Cyto-Savant and its use in automated screening of cervical smears. In Compendium on the Computerized Cytology and Histology Laboratory; Wied, G.L., Bartels, P.H., Rosenthal, D.L., Schenck, U., Eds.; Tutorials of Cytology: Chicago, IL, USA, 1994. [Google Scholar]
  20. Husain, O.A.N.; Butler, E.B.; Nayagam, M.; Mango, L.; Alonzo, A. An analysis of the variation of human interpretation: Papnet a mini-challenge. Anal. Cell. Pathol. 1994, 6, 157–163. [Google Scholar] [PubMed]
  21. Koss, L.G.; Lin, E.; Schreiber, K.; Elgert, P.; Mango, L. Evaluation of the Papnet cytologic screening system for quality control of cervical smears. Am. J. Clin. Pathol. 1994, 101, 220–229. [Google Scholar] [CrossRef] [PubMed]
  22. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  23. Huynh-The, T.; Pham, Q.-V.; Pham, X.-Q.; Nguyen, T.T.; Han, Z.; Kim, D.-S. Artificial intelligence for the metaverse: A survey. Eng. Appl. Artif. Intell. 2023, 117, 105581. [Google Scholar] [CrossRef]
  24. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification. arXiv 2012, arXiv:1202.2745. [Google Scholar]
  25. Gedefaw, L.; Liu, C.-F.; Ip, R.K.L.; Tse, H.-F.; Yeung, M.H.Y.; Yip, S.P.; Huang, C.-L. Artificial Intelligence-Assisted Diagnostic Cytology and Genomic Testing for Hematologic Disorders. Cells 2023, 12, 1755. [Google Scholar] [CrossRef]
  26. Hanna, M.G.; Hanna, M.H. Current applications and challenges of artificial intelligence in pathology. Hum. Pathol. Rep. 2022, 27, 300596. [Google Scholar] [CrossRef]
  27. Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  28. Wang, J.; Yang, Y.; Mao, J.; Huang, Z.; Huang, C.; Xu, W. CNN-RNN: A unified framework for multi-label image classification. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  29. Chen, T.; Wang, Z.; Li, G.; Lin, L. Recurrent attentional reinforcement learning for multi-label image recognition. arXiv 2017, arXiv:1712.07465. [Google Scholar] [CrossRef]
  30. Alsubaie, N.; Trahearn, N.; Raza, S.E.A.; Snead, D.; Rajpoot, N.M. Stain deconvolution using statistical analysis of multi resolution stain colour representation. PLoS ONE 2017, 12, e0169875. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Ma, Z.; Shiao, S.L.; Yoshida, E.J.; Swartwood, S.; Huang, F.; Doche, M.E.; Chung, A.P.; Knudsen, B.S.; Gertych, A. Data integration from pathology slides for quantitative imaging of multiple cell types within the tumor immune cell infiltrate. Diagn. Pathol. 2017, 12, 69. [Google Scholar] [CrossRef]
  32. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Pearson Education: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  33. McAlpine, E.D.; Michelow, P. The cytopathologist’s role in developing and evaluating artificial intelligence in cytopathology practice. Cytopathology 2020, 31, 385–392. [Google Scholar] [CrossRef] [PubMed]
  34. Xu, C.T.; Li, M.; Li, G.; Zhang, Y.; Sun, C.; Bai, N. Cervical Cell/Clumps Detection in Cytology Images Using Transfer Learning. Diagnostics 2022, 12, 2477. [Google Scholar] [CrossRef]
  35. Ullo, S.L.; Mohan, A.; Sebastianelli, A.; Ahamed, S.E.; Kumar, B.; Dwivedi, R.; Sinha, G.R. A New Mask R-CNN-Based Method for Improved Landslide Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3799–3810. [Google Scholar] [CrossRef]
  36. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; PMLR: Long Beach, CA, USA, 2019; Volume 97, pp. 6105–6114. [Google Scholar]
  37. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  38. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
  39. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  40. Zbontar, J.; Jing, L.; Misra, I.; LeCun, Y.; Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. arXiv 2021, arXiv:2103.03230. [Google Scholar]
  41. Ghesu, F.C.; Georgescu, B.; Mansoor, A.; Yoo, Y.; Neumann, D.; Patel, P.; Vishwanath, R.S.; Balter, J.M.; Cao, Y.; Grbic, S.; et al. Self-supervised Learning from 100 million Medical Images. arXiv 2022, arXiv:2201.01283. [Google Scholar]
  42. Che, Z.; Purushotham, S.; Khemani, R.; Liu, Y. Interpretable deep models for ICU outcome prediction. AMIA Annu. Symp. Proc. 2016, 2016, 371–380. [Google Scholar]
  43. Lilli, L.; Giarnieri, E.; Scardapane, S. A Calibrated Multiexit Neural Network for Detecting Urothelial Cancer Cells. Comput. Math. Methods Med. 2021, 2021, 5569458. [Google Scholar] [CrossRef] [PubMed]
  44. Aljuaid, H.; Alturki, N.; Alsubaie, N.; Cavallaro, L.; Liotta, A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Comput. Methods Programs Biomed. 2022, 223, 106951. [Google Scholar] [CrossRef] [PubMed]
  45. Krishna, S.; Han, T.; Gu, A.; Pombra, J.; Jabbari, S.; Wu, S.; Lakkaraju, H. The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective. arXiv 2022, arXiv:2202.01602. [Google Scholar]
  46. Zhang, Z.; Fu, X.; Liu, J.; Huang, Z.; Liu, N.; Fang, F.; Rao, J. Developing a Machine Learning Algorithm for Identifying Abnormal Urothelial Cells: A Feasibility Study. Acta Cytol. 2021, 65, 335–341. [Google Scholar] [CrossRef]
  47. Ogura, M.; Kiyuna, T.; Yoshida, H. Impact of blurs on machine-learning aided digital pathology image analysis. Artif. Intell. Cancer 2020, 1, 31–38. [Google Scholar] [CrossRef]
  48. Hu, B.; Li, G.; Brown, J.Q. Enhanced resolution 3D digital cytology and pathology with dual view inverted selective plane illumination microscopy. Biomed. Opt. Express 2019, 10, 3833–3846. [Google Scholar] [CrossRef] [PubMed]
  49. Geng, X.; Liu, X.; Cheng, S.; Zeng, S. Cervical cytopathology image refocusing via multi-scale attention features and domain normalization. Med. Image Anal. 2022, 81, 102566. [Google Scholar] [CrossRef]
  50. Wang, A.; Wang, J.; Liu, J.; Gu, N. AIPNet: Image-to-image single image dehazing with atmospheric illumination prior. IEEE Trans. Image Process. 2019, 28, 381–393. [Google Scholar] [CrossRef] [PubMed]
  51. Jin, Z.; Iqbal, M.Z.; Bobkov, D.; Zou, W.; Li, X.; Steinbach, E. A flexible deep CNN framework for image restoration. IEEE Trans. Multimed. 2020, 22, 1055–1068. [Google Scholar] [CrossRef]
  52. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2480–2495. [Google Scholar] [CrossRef] [Green Version]
  53. Liu, X.; Chau, K.Y.; Chan, H.S.; Wan, Y. A visualization analysis using the VOS viewer of literature on virtual reality technology application in healthcare. In Cases on Virtual Reality Modeling in Healthcare; IGI Global: Hershey, PA, USA, 2022; pp. 1–20. [Google Scholar]
  54. Farahani, N.; Post, R.; Duboy, J.; Ahmed, I.; Kolowitz, B.J.; Krinchai, T.; Monaco, S.E.; Fine, J.L.; Hartman, D.J.; Pantanowitz, L. Exploring virtual reality technology and the Oculus Rift for the examination of digital pathology slides. J. Pathol. Inform. 2016, 7, 22. [Google Scholar] [CrossRef]
  55. Lunz, S.; Li, Y.; Fitzgibbon, A.; Kushman, N. Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data. arXiv 2020, arXiv:2002.12674. [Google Scholar]
  56. White, P.J.; Ahmad, B.; Zahra, M. Effect of Viewing Mode on Pathfinding in Immersive Virtual Reality. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015. [Google Scholar]
  57. Chowdhary, K.R. Natural Language Processing. In Fundamentals of Artificial Intelligence, 1st ed.; Springer: New Delhi, India, 2020; pp. 603–649. [Google Scholar]
  58. Zhou, C.; Liu, P.; Xu, P.; Iyer, S.; Sun, J.; Mao, Y.; Ma, X.; Efrat, A.; Yu, P.; Yu, L.; et al. LIMA: Less Is More for Alignment. arXiv 2023, arXiv:2305.11206. [Google Scholar]
  59. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef] [PubMed]
  60. Khalil, M.; Er, E. Will ChatGPT get you caught? Rethinking of Plagiarism Detection. arXiv 2023, arXiv:2302.04335. [Google Scholar]
  61. Will ChatGPT transform healthcare? [Editorial]. Nat. Med. 2023, 29, 505–506. [Google Scholar]
  62. Stefanini, M.; Baraldi, L.; Cascianelli, S.; Fiameni, G.; Cucchiara, R. From Show to Tell: A Survey on Deep Learning-Based Image Captioning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 539–559. [Google Scholar] [CrossRef]
  63. Gadekallu, T.R.; Pham, Q.-V.; Nguyen, D.C.; Maddikunta, P.K.R.; Deepa, N.; Prabadevi, B.; Pathirana, P.N.; Zhao, J.; Hwang, W.-J. Blockchain for edge of things: Applications, opportunities, and challenges. IEEE Internet Things J. 2022, 9, 964–988. [Google Scholar] [CrossRef]
  64. Ziegler, D.M.; Stiennon, N.; Wu, J.; Brown, T.B.; Radford, A.; Amodei, D.; Christiano, P.; Irving, G. Fine-Tuning Language Models from Human Preferences. arXiv 2020, arXiv:1909.08593. [Google Scholar]
  65. Peng, C.; Yang, X.; Chen, A.; Smith, K.E.; PourNejatian, N.; Costa, A.B.; Martin, C.; Flores, M.G.; Zhang, Y.; Magoc, T.; et al. A Study of Generative Large Language Model for Medical Research and Healthcare. arXiv 2023, arXiv:2305.13523. [Google Scholar]
  66. Liu, Y.; Yu, F.R.; Li, X.; Ji, H.; Leung, V.C.M. Blockchain and machine learning for communications and networking systems. IEEE Commun. Surv. Tutor. 2020, 22, 1392–1431. [Google Scholar] [CrossRef]
  67. Weng, J.; Weng, J.; Zhang, M.; Li, Y.; Luo, Z.; Luo, W. DeepChain: Auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Trans. Dependable Secur. Comput. 2021, 18, 2438–2455. [Google Scholar] [CrossRef]
  68. Park, Y.R.; Lee, E.; Na, W.; Park, S.; Lee, Y.; Lee, J. Is blockchain technology suitable for managing personal health records? Mixed-methods study to test feasibility. J. Med. Internet Res. 2019, 21, e12533. [Google Scholar] [CrossRef] [PubMed]
  69. Schmitt, M.; Maron, R.C.; Hekler, A.; Stenzinger, A.; Hauschild, A.; Weichenthal, M.; Tiemann, M.; Krahl, D.; Kutzer, H.; Utikal, J.S.; et al. Hidden variables in deep learning digital pathology and their potential to cause batch effects: Prediction model study. J. Med. Internet Res. 2021, 23, e23436. [Google Scholar] [CrossRef]
  70. Tizhoosh, H.; Pantanowitz, L. Artificial intelligence and digital pathology: Challenges and opportunities. J. Pathol. Inform. 2018, 9, 38. [Google Scholar] [CrossRef]
  71. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef] [PubMed]
  72. Hong, R.; Liu, W.; DeLair, D.; Razavian, N.; Fenyö, D. Predicting endometrial cancer subtypes and molecular features from histopathology images using multi-resolution deep learning models. Cell Rep. Med. 2021, 2, 100400. [Google Scholar] [CrossRef] [PubMed]
  73. Lu, M.Y.; Chen, R.J.; Kong, D.; Lipkova, J.; Singh, R.; Williamson, D.F.K.; Chen, T.Y.; Mahmood, F. Federated learning for computational pathology on gigapixel whole slide images. Med. Image Anal. 2022, 76, 102298. [Google Scholar]
  74. Warnat-Herresthal, S.; Schultze, H.; Shastry, K.L.; Manamohan, S.; Mukherjee, S.; Garg, V.; Sarveswara, R.; Händler, K.; Pickkers, P.; Aziz, N.A.; et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature 2021, 594, 265–270. [Google Scholar] [CrossRef]
Figure 1. Cytopathology image of high-grade urothelial carcinoma (HGUC) showing numerous pleomorphic tumor cells. Cell detection with the addition of bounding boxes is used to annotate selected images. Each selected cell is classified as normal or pathological, indicated by a different color. After a suitable dataset is built and exported, a DNN can be trained to automate the process. (Conventional cytology, Papanicolaou staining, low magnification).
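The annotation workflow described in the Figure 1 caption (bounding boxes drawn on selected cells, each labeled as normal or pathological, then exported as a training dataset) can be sketched in a few lines of Python. The field names, labels, and the COCO-style export layout below are illustrative assumptions for a minimal sketch, not the authors' actual pipeline.

```python
import json
from dataclasses import dataclass


@dataclass
class CellAnnotation:
    """One bounding box drawn around a cell on the slide image."""
    x: int
    y: int
    width: int
    height: int
    label: str  # e.g., "normal" or "malignant"


def build_coco_dataset(image_id, file_name, annotations):
    """Assemble a minimal COCO-style detection dataset for one slide image."""
    categories = sorted({a.label for a in annotations})
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    return {
        "images": [{"id": image_id, "file_name": file_name}],
        "categories": [{"id": cid, "name": name} for name, cid in cat_ids.items()],
        "annotations": [
            {
                "id": i + 1,
                "image_id": image_id,
                "bbox": [a.x, a.y, a.width, a.height],
                "category_id": cat_ids[a.label],
            }
            for i, a in enumerate(annotations)
        ],
    }


# Two hypothetical annotations on one HGUC slide image.
annotations = [
    CellAnnotation(120, 80, 34, 30, "malignant"),
    CellAnnotation(260, 150, 28, 27, "normal"),
]
dataset = build_coco_dataset(1, "hguc_slide.png", annotations)
print(json.dumps(dataset, indent=2))
```

Once exported in a standard detection format like this, the dataset can be consumed directly by common object-detection training frameworks.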
Figure 2. Equipped with virtual reality headsets, a clinician can visualize imaging and clinical data in an immersive fashion.
Figure 3. Medical chatbots and LLMs (e.g., ChatGPT) can provide interactive interfaces for querying medical data and performing literature searches.
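As a toy illustration of the retrieval step a medical chatbot like the one in Figure 3 might perform before handing context to an LLM, the sketch below ranks locally stored abstracts by keyword overlap with a clinician's query. The titles, abstract texts, and scoring scheme are invented for the example; a real system would use embedding-based retrieval over a curated corpus.

```python
def search_abstracts(query, abstracts):
    """Rank stored abstracts by how many query words they share.

    abstracts: dict mapping title -> abstract text.
    Returns titles sorted by descending word overlap, omitting zero matches.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        ((len(q_words & set(text.lower().split())), title)
         for title, text in abstracts.items()),
        reverse=True,
    )
    return [title for score, title in scored if score > 0]


# Hypothetical mini literature index.
abstracts = {
    "Urine cytology AI": "deep learning for urothelial cell classification",
    "Cervical screening": "automated screening of cervical smears",
}
print(search_abstracts("deep learning cell classification", abstracts))
# → ['Urine cytology AI']
```

The ranked titles (or their full abstracts) would then be passed to the language model as grounding context, rather than relying on the model's parametric knowledge alone.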
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
