Article

Deep Learning on Histopathology Images for Breast Cancer Classification: A Bibliometric Analysis

by Siti Shaliza Mohd Khairi 1,2, Mohd Aftar Abu Bakar 2,*, Mohd Almie Alias 2, Sakhinah Abu Bakar 2, Choong-Yeun Liong 2,†, Nurwahyuna Rosli 3 and Mohsen Farid 4

1 Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
2 Department of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
3 Department of Pathology, Faculty of Medicine, Hospital Canselor Tuanku Muhriz, Universiti Kebangsaan Malaysia, Jalan Yaacob Latif, Bandar Tun Razak, Cheras, Kuala Lumpur 56000, Malaysia
4 Department of Computing and Mathematics, University of Derby, Kedleston Road, Derby DE22 1GB, UK
* Author to whom correspondence should be addressed.
† The author is deceased.
Healthcare 2022, 10(1), 10; https://doi.org/10.3390/healthcare10010010
Submission received: 15 November 2021 / Revised: 7 December 2021 / Accepted: 12 December 2021 / Published: 22 December 2021

Abstract
Medical imaging is gaining significant attention in healthcare, including for breast cancer. Breast cancer is the leading cause of cancer-related death among women worldwide. Currently, histopathology image analysis is the clinical gold standard in cancer diagnosis. However, the manual process of microscopic examination involves laborious work and can be misleading due to human error. Therefore, this study explored the research status and development trends of deep learning for breast cancer image classification using bibliometric analysis. Relevant literature was obtained from the Scopus database for the period 2014 to 2021. The VOSviewer and Bibliometrix tools were used for analysis through various visualization forms. This study is concerned with the annual publication trends and the co-authorship networks among countries, authors, and scientific journals. The co-occurrence network of the authors’ keywords was analyzed for potential future directions of the field. Authors started to contribute to publications in 2016, and the research domain has maintained its growth rate since. The United States and China have strong research collaboration strengths. Only a few studies use bibliometric analysis in this research area. This study provides a recent review of this fast-growing field to highlight its status and trends using scientific visualization. It is hoped that the findings will assist researchers in identifying and exploring potential emerging areas in the related field.

1. Introduction

Cancer may arise from almost any part of the human body where cells start to grow uncontrollably [1]. Deaths caused by cancer keep increasing every year, and cancer is considered one of the main illnesses globally [2,3,4]. Breast cancer is one of the top illnesses contributing to the highest death rates among women, especially in regions such as Melanesia, Western Africa, Australia, Micronesia/Polynesia, and the Caribbean [5]. However, it is noticeable that the percentages of breast cancer cases in Australia, Western Europe, Northern America, and Northern Europe are the highest [5,6]. Women are commonly diagnosed with breast cancer, but men are not excluded [7]. The breast structure of women is mainly made up of milk ducts, lobules, and adipose tissue [8]. Breast cancer may initiate in the ducts, which carry milk to the nipple, or in the lobules, the glands of the breast that produce breast milk [8,9]. Globally, the majority of breast cancers are of ductal and lobular subtypes, with ductal subtypes comprising 40–75% of all reported cases [10].
Early diagnosis and treatment may help prevent breast cancer from developing to an advanced level. There are several medical imaging procedures for breast cancer such as mammograms (X-rays), ultrasound (sound waves/sonography), magnetic resonance imaging (MRI), and biopsy [11,12,13,14]. However, the use of breast cancer images to confirm the cancerous region is only available through biopsy procedures [15]. Tissue biopsy examination is currently the clinical gold standard in cancer diagnosis. Tissue biopsy produces histopathology images that can enhance the results of breast cancer classification [16]. The basic procedure in biopsy is collecting a tissue sample from the body for further analysis by the histopathologist [17]. The tissue is immersed in formalin solution and embedded in paraffin wax before being cut carefully, resulting in histopathology slides which are then converted to images [18,19]. However, the manual procedure of biopsy analysis is tedious, time-consuming, and restricted by the quality of the histopathology image and the histopathologists’ skill [20,21]. The histopathology images are stored and analyzed using a Computer-Aided Diagnosis (CAD) system [22]. The CAD system is used to overcome the issue of classification accuracy in manual approaches [23], and machine learning techniques are required [24].
The involvement of machine learning algorithms could help to reduce the number of unnecessary biopsy images. For image analysis, there are four important stages to be considered: (i) input, (ii) feature extraction and selection, (iii) classifier model, and (iv) classifier output. According to Nahid and Kong [8], feature extraction and representation approaches are important for producing accurate and reliable results. There are two types of features: hand-crafted features and learned features. Expert-level knowledge is required for hand-crafted feature extraction during image analysis [25]. Predefined hand-crafted features are important in traditional machine learning methods, such as support vector machine (SVM), Naïve Bayes, random forest (RF), and k-means clustering. For example, [26] used regional and localized features with an SVM classifier to evaluate the quality of 3D images. On the other hand, the wavelet transform was applied in a tree-structured algorithm for automatic image grading on two datasets with different magnification factors [27]. The authors used k-means clustering and texture features to locate the affected regions in the segmentation process. Similarly, [28] also used the wavelet transform to extract features from breast cancer images and an SVM classifier for feature selection. The result indicates that the combination of an SVM classifier and a chain-like agent genetic algorithm (CAGA) to obtain the optimal feature set was remarkable, with an accuracy of 96.19%.
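As an illustration only (not the cited authors’ implementations), the following minimal sketch shows the traditional pipeline described above: hand-crafted texture features feeding an SVM classifier. It assumes scikit-image and scikit-learn, and the variables image_paths and labels are hypothetical placeholders.

```python
# A minimal sketch of the traditional machine learning pipeline:
# hand-crafted GLCM texture features + SVM classifier.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def glcm_features(path):
    """Extract simple GLCM texture descriptors from one histopathology image."""
    gray = (rgb2gray(imread(path)) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# image_paths and labels (0 = benign, 1 = malignant) are assumed to exist
X = np.array([glcm_features(p) for p in image_paths])
X_train, X_test, y_train, y_test = train_test_split(X, labels,
                                                    test_size=0.3, random_state=42)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```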
The majority of existing studies are limited to a macroscopic overview of breast cancer image classification; specific visual bibliometric analyses are relatively scarce. Based on bibliometric analysis, this research aims to present an updated and microscopic (fine-grained) overview of the characteristics of breast cancer image classification publications. The clear and informative maps offered in this work highlight research accomplishments in the domain of deep learning for breast cancer image classification, which may aid researchers and practitioners in identifying the underlying implications of authors, journals, countries, references, and research themes. The co-authorship network analysis is believed to give some insight into the intellectual collaboration and interaction between researchers. In detail, the focuses of the paper are: (i) to examine the growth of publications and citations on deep learning approaches published from 2014 to 2021, (ii) to map the co-authorship networks among countries, authors, and scientific journals, and (iii) to analyze the co-occurrence network of the authors’ keywords globally.
It is hoped that the findings will help to initiate ideas for future research in the related field and, in turn, benefit patients and healthcare providers. This study is also important as guidance for researchers who are unfamiliar with deep learning but interested in its potential in breast cancer image classification, as the most active researchers and recent significant research topics among authors are identified. This study specifically highlights the application of deep learning rather than traditional machine learning, since the field has recently become more strongly associated with image classification. Based on the overview of the progress, it is estimated that deep learning will continue to evolve and flourish as a significant tool for image classification.

2. Breast Cancer Image Classification

2.1. Bibliometric Analysis of Breast Cancer Studies

Bibliometrics can visualize the structure of scientific disciplines based on the bibliographic information obtained from databases [29]. Bibliometrics has been used in a vast range of scientific areas to analyze the trends and patterns of prior studies, such as web accessibility, text mining, sustainable business, and healthcare [29,30,31,32]. Some bibliometric studies have discussed breast-cancer-related topics. Cinar [33] provided a bibliometric analysis of 2734 articles related to breast cancer focused on the nursing field from 2009 to 2018. Based on the keyword analysis, the term “breast cancer survivor” was highly cited from 2014 to 2018, and research showed a progressive trend of breast cancer studies related to the nursing field within those five years. Salod and Singh [34] studied the publication trends, country collaboration, author productivity, institutional collaboration, and productive journals based on the literature related to breast cancer in the field of machine learning.
In a recent review, Joshi et al. [35] studied machine learning methods applied to breast cancer histopathology images. Machine learning is a subset of artificial intelligence which includes statistical methods that can improve and learn information directly from data. They pointed out that there was a growing interest in machine learning and histopathology images of breast cancer. Based on keyword analysis, their study revealed that disease in females, breast cancer, deep learning, histopathology, and medical imaging are the most important keywords [35]. This showed that machine learning applications offer a potential research trend in medical image analysis. However, the final performance of image analysis relies on data pre-processing, including hand-crafted feature extraction, which is hard to address using traditional machine learning methods [25,36]. With the technological evolution of deep learning and the rapid growth of its applications in healthcare, especially breast cancer, understanding the development of deep learning has become essential.

2.2. Breast Cancer Image Database

Breast cancer is a common cancer type among people, especially women, around the world. Early detection of breast cancer leads to appropriate treatment, which may increase the survival rate of affected people [35]. Hence, a well-defined database is important to measure the performance of breast cancer classification models. Several databases are publicly available for breast cancer diagnosis, such as the Mammography Image Analysis Society (MIAS), the Wisconsin Breast Cancer Dataset (WBCD), the Digital Database for Screening Mammography (DDSM), Breast Cancer Histopathology (BreakHis), and Breast Cancer Histology (BACH). Since deep learning is gaining fame for its ability to process image data in hierarchical representations using nonlinear transformations [25,37], histopathology images are broadly used by researchers. The BreakHis and BACH datasets are made up of histopathology images. According to Li et al. [38], the BreakHis dataset is extensively used in CNN algorithms related to image classification. They proposed a new CNN architecture that uses local information in the breast cancer images and extracts extra features through different dense blocks and an SENet module.
The BreakHis dataset was first introduced in 2016 and comprises 7909 histopathology images collected from the P & D Laboratory, Brazil [39]. Nahid and Kong [8] stated that after the introduction of the BreakHis dataset, about 20 articles were published within a year from 2016 to 2017. Out of the total images, 2480 are benign and 5429 are malignant, with four different magnification factors. Table 1 shows the detailed image distribution based on the magnification factors 40×, 100×, 200×, and 400×. Similarly, the BACH dataset [40] also consists of three-channel RGB histopathology images. The biopsy tissues collected were stained with the standard hematoxylin and eosin (H&E) staining protocol, resulting in a total of 400 histopathology images.
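A minimal sketch of how the image distribution by class and magnification factor could be tallied from a hypothetical local copy of BreakHis; the directory layout assumed here (benign/malignant folders with 40X–400X subfolders) is illustrative and may differ from the released archive.

```python
# Count BreakHis images per (class, magnification) from an assumed folder layout.
from pathlib import Path
from collections import Counter

root = Path("BreaKHis_v1")  # placeholder path to a local copy of the dataset
counts = Counter()
for img in root.rglob("*.png"):
    parts = [p.lower() for p in img.parts]
    label = "benign" if "benign" in parts else "malignant"
    magnification = next((p for p in parts if p in {"40x", "100x", "200x", "400x"}), "unknown")
    counts[(label, magnification)] += 1

for (label, mag), n in sorted(counts.items()):
    print(f"{label:9s} {mag:>5s}: {n} images")
```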

2.3. Breast Cancer Image Classification using Deep Learning Approaches

In earlier studies, the classification of breast cancer images centered on traditional machine learning methods such as Support Vector Machine (SVM) [41,42,43], Naïve Bayes [44,45,46], and Random Forest [47,48]. Machine learning involves the design and deployment of algorithms to assess data and corresponding attributes without any prior task, based on predetermined inputs from the environment [49]. Traditional machine learning methods rely on the quality of feature extraction, which is limited to certain problems because of their shallow classifiers [25]. Lately, deep learning methods have been proven to deliver more promising results, specifically on large and complex data [50]. The implementation of feature learning methods (transfer learning) in deep learning helps to reduce the computational time while obtaining significant accuracy compared with machine learning based on hand-crafted features [51]. Generally, deep learning in a CAD system outperforms the traditional approach because automatic feature learning analyzes the variation and complexity of images directly; hence, the convolutional neural network (CNN) is the most common model used for breast cancer diagnosis [52,53]. In 2020, Lin and Jeng [54] proposed a CNN model with uniform experimental design (UED) to classify breast cancer histopathology images. Their model outperformed other established deep learning models with the lowest computational time. Current computing power can help to solve the related problems and further improve the quality of health and life in the community.
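The transfer learning idea mentioned above can be illustrated with a minimal sketch (not any specific published model): a pre-trained ResNet50 backbone from ImageNet with a new binary head for benign versus malignant patches, using tensorflow.keras; the training datasets are assumed to exist.

```python
# A minimal transfer-learning sketch: frozen ImageNet backbone + new binary head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, label) pairs
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```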
Deep learning is an established and emerging approach among researchers in the field of machine learning [55,56]. The main objective when employing deep learning is to discover multiple levels of representation through learning algorithms aimed at higher-level features for image classification and identification [50,57,58]. Generally, it is focused on learning algorithms that are able to learn, develop, and improve on their own as they process data. Deep learning algorithms can extract features from high-dimensional images for internal representation [16]. Traditional machine learning works well with structured data with up to hundreds of features or characteristics. Unfortunately, for unstructured data, the analysis process becomes tedious, or worse, unfeasible. Unstructured data are data stored in a format not prescribed by data models, such as images, media, text, and audio. Deep learning models the fundamental or required qualities in data using a model architecture that is made up of multiple processing layers and non-linear transformations [50,55,59].
It has been observed that researchers’ attention has recently shifted to deep learning because of its great success in solving problems related to unstructured data. The convolutional neural network (CNN) is a class of deep learning models that can be used effectively for image classification and feature extraction [60]. In the medical field, deep learning provides a useful approach for assisting radiologists in making an early breast cancer diagnosis from histopathology images [59,61]. Breast cancer classification, signal processing [62], and image analysis [63] have benefited from deep learning methods in recent years.
In 2021, Zuluaga-Gomez et al. [60] designed a deep learning architecture based on CNNs to visually detect patterns in thermal images (DMR-IR database). They used Bayesian optimization with the Tree Parzen Estimator (TPE) to optimize the hyper-parameters of the algorithm. Experimental results showed a competitive improvement of the CNN approach, with an accuracy of 92%. The study also proved that data pre-processing and data augmentation help in improving model performance. Similarly, Alom et al. [63] presented a novel CNN approach based on inception and residual networks for breast cancer multi-classification with different data augmentation methods. The experiments showed an improvement in accuracy of approximately 1.05% (image-level) and 0.55% (patient-level) compared with learning-based and data-driven models for multi-classification.
With the aim of detecting and identifying breast cancer, Hirra et al. [59] applied a patch-based deep learning approach, the Deep Belief Network (DBN), for automatic feature extraction on histopathology images. The proposed model, namely, Ps-DBN-BC, gained a promising result with an accuracy greater than 85%, hence outperforming a 17-layer CNN architecture. This work indicated that an architecture with deeper layers does not necessarily provide outstanding performance. Hameed et al. [61] developed an ensemble deep learning approach for histopathology images to classify carcinoma or non-carcinoma images automatically. They used two pre-trained deep CNN-based models for excellent convergence results on a small dataset, and the accuracy obtained was 95.29%. On the other hand, deep learning also benefited the signal processing area, as presented by Pavithra et al. [62] on the effectiveness of thermography for breast cancer detection with an appropriate choice of feature extraction, segmentation, and classification algorithms.

3. Materials and Methods

3.1. Bibliometric Analysis

The implementation of bibliometric analysis at the beginning of the research process is popular among researchers because it helps to uncover the information underlying published articles in specific areas or topics [64,65]. Although there are different methods to explore and organize earlier findings from the literature search, bibliometrics has advantages in terms of being a systematic, understandable, and reproducible review process [66]. A detailed bibliometric analysis can capture the growth of particular research studies in a given time period [67]. The bibliometric networks were visualized using the R programming language [68] and the VOSviewer software [69]. This study executed co-authorship and co-occurrence analyses for network mapping. Bibliometric analysis was used to study the relationship of scientific publications among countries and authors by constructing and visualizing network maps.
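As an illustrative reconstruction only (VOSviewer and Bibliometrix perform these steps internally), the following sketch builds a co-authorship network and total link strengths from a Scopus CSV export, assuming an "Authors" column of semicolon-separated names and a placeholder file name.

```python
# Build a weighted co-authorship graph from a Scopus export (assumed format).
from itertools import combinations
import pandas as pd
import networkx as nx

df = pd.read_csv("scopus_export.csv")  # placeholder file name
G = nx.Graph()
for authors in df["Authors"].dropna():
    names = [a.strip() for a in authors.split(";") if a.strip()]
    for a, b in combinations(sorted(set(names)), 2):
        # edge weight = number of co-authored documents (link strength)
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# total link strength of an author = sum of the weights of its edges
strength = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G}
print(sorted(strength.items(), key=lambda x: -x[1])[:10])
```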

3.2. Data Collection

The data retrieval process involved the Scopus database, accessed on 22 October 2021. Scopus, from Elsevier, is one of the largest relevant academic abstract and indexing databases [30,70]. Scopus is also more effective for health-related topic searches compared with other databases such as PubMed and Web of Science [71,72]. The bibliometric analysis reviewed all related articles published between 2014 and 2021.
Articles included in the research focused on histopathology breast cancer images. For further analysis, articles that mentioned deep learning, convolutional neural network, transfer learning, breast cancer, breast neoplasm, breast tumor, and breast diagnostic were included. Articles were selected based on a review of their abstracts. All included articles were available for download, and non-English articles were excluded.
The study is focused on deep learning algorithms for breast cancer image classification. Hence, articles that use conventional neural networks or other machine learning techniques such as regression, clustering, and decision trees were excluded from this research. Deep learning algorithms have gained huge interest in biomedical image analysis [73,74]. In fact, deep learning algorithms have been shown to be a better alternative for medical image classification and detection. There are several characteristics of deep learning, such as incorporating a large amount of data, the depth of the network, and optimizing hyper-parameters. In addition, studies involving other image types, for instance, mammograms, ultrasound, and thermograms, were discarded from the analysis. A total of 498 articles were extracted from the Scopus database. After the filtration process based on the inclusion and exclusion criteria, 488 articles were selected for extraction of article elements. After a thorough screening process based on the abstracts, 373 articles were finally included for further analysis. Figure 1 shows the flow diagram of the research process.
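The screening logic can be sketched as follows, assuming hypothetical Scopus export column names ("Title", "Abstract", "Language of Original Document"); the actual inclusion decisions in this study were made by reviewing abstracts manually.

```python
# A minimal, assumed reconstruction of the keyword-based screening step.
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # placeholder for the raw export (498 records in the study)

include_terms = ["deep learning", "convolutional neural network", "transfer learning",
                 "breast cancer", "breast neoplasm", "breast tumor", "breast diagnostic"]
exclude_terms = ["mammogram", "ultrasound", "thermogram",
                 "regression", "clustering", "decision tree"]

text = (df["Title"].fillna("") + " " + df["Abstract"].fillna("")).str.lower()
mask = (
    (df["Language of Original Document"] == "English")
    & text.apply(lambda t: any(term in t for term in include_terms))
    & ~text.apply(lambda t: any(term in t for term in exclude_terms))
)
screened = df[mask]
print(len(screened), "records retained for further analysis")
```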

4. Results and Discussion

4.1. Overview on Document and Source Type

Firstly, the data were tabulated based on the document type, where document type refers to the type of structured document with several valid elements and originality, such as an article, conference paper, review, or book chapter. Meanwhile, source type is the source information for the documents, including journals, conference proceedings, and book series. There is a possibility that abstracts from conference proceedings are published twice, as a conference abstract and as a full journal article [75]. Given the fast development in computer science and studies in the deep learning area, proceedings publications were also considered in this bibliometric analysis. Recent studies also showed that proceedings publications do have a significant impact on highly cited publications, especially in terms of citation counts [76,77]. The majority of the publications are scientific articles (48.53%), followed by conference papers (41.55%), conference reviews (4.02%), reviews (3.22%), and book chapters (1.87%), as shown in Table 2. Other document types represent less than 1% of the total publications.

4.2. Publication Growth

The pattern of publication growth is measured based on the documents published in a particular year. Figure 2 presents the publication trends and total mean citations of articles annually from 2014 to 2021. Scopus recorded Wang et al. [51] as the first published document on deep learning for breast cancer histopathology images in 2014, and to date, the document has more than 250 citations. Inspired by the rapid development of systems for invasive breast cancer detection, the authors combined a deep learning approach with hand-crafted features to maximize model performance while reducing computational complexity, since only a light CNN method was implemented. They utilized 326 mitotic nuclei of breast cancer images in a three-layer CNN architecture (two pooling layers and a fully connected layer). Since the number of images was low, they also used the Synthetic Minority Oversampling Technique (SMOTE) to reduce bias during classification. Based on the comparison of several CNN-based methods, the results indicated that false positive (FP) errors were reduced, which showed that the CNN was able to classify the images accurately. In 2016, one of the authors, Madabhushi A., collaborated with Janowczyk A. in [78] to analyze digital pathology images using deep learning methods through segmentation and detection tasks of breast cancer images. They concluded that deep learning can be a reliable method because of its advantage in feature extraction, which can be performed directly from the images. The study has been cited by 747 documents since its first publication. This shows that more researchers are becoming interested in deep-learning-related research. Apart from that, Figure 2 also depicts the number of publications, which increased steadily between 2015 and 2021, with the peak being in 2021, with 118 documents. This indicates that advancements in computing power and imaging technologies have led researchers to explore the potential of deep learning to provide more promising results for histopathological image analysis [79,80,81]. From a citation perspective, the mean total citations of the documents were highest in 2014, followed by 2016; meanwhile, the lowest was for those published in 2021. This is not surprising, as not enough citable years have elapsed since publication [82].

4.3. Country Network Analysis

The co-authorship network of countries on breast cancer image classification using deep learning resulted in 71 countries from 2014 to 2021. Table 3 tabulates the top five countries according to their total link strength. The United States is considered a prominent country in scientific publications compared to others. The result is in line with other bibliometric analyses on “breast cancer” [33]. This could be attributed to the greater financial support for researchers in the United States and its large population [83].
Based on a threshold of three publications per country, 35 countries were matched, as shown in Figure 3. The size of each circle represents the total link strength, and the lines between countries represent collaboration links. In the country network analysis, there are nine different colors, which indicate a total of nine clusters (distinguished by the colors red, green, blue, yellow, purple, aqua, orange, brown, and pink). In bibliometric analysis, each research constituent (countries, authors, and journals) is normally clustered using a combination of multidimensional scaling (MDS) and hierarchical clustering (see [84]). In this study, the clustering was based on a unified approach proposed by [85] with modularity-based clustering to explore the structure of the network, such as the social interaction among authors and their countries. For example, Cluster 6 (Aqua) has strong collaboration with other countries, such as those from Cluster 2 (Green) and Cluster 3 (Blue). All countries are connected to each other in the network map.
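For illustration, a modularity-based clustering similar in spirit to (though not identical with) the approach of [85] can be run on a country collaboration graph with networkx; the country pairs and co-publication weights below are placeholder values only.

```python
# Modularity-based community detection on an illustrative country collaboration graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (country_a, country_b, number of co-publications) -- illustrative values only
country_pairs = [("United States", "China", 25), ("United States", "India", 8),
                 ("China", "Pakistan", 6), ("India", "Saudi Arabia", 5)]

G = nx.Graph()
for a, b, w in country_pairs:
    G.add_edge(a, b, weight=w)

clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}: {sorted(cluster)}")
```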
It is interesting to note that India is one of the countries with a high number of publications; however, its number of citations is far less than those of the United States and China. This could be explained by the passion of researchers to conduct studies on the topic within the country but the lack of collaboration with other countries. The overlay visualization in Figure 4 focuses on the country collaboration of India. A total of 16 countries collaborated with India, including Iraq, Norway, Saudi Arabia, South Korea, and France. By referring to the lines connecting each country to India, most of the countries started collaborating with India in early 2020. Deep learning methods have achieved great success in breast cancer image classification among researchers in India [86,87,88,89]. This also explains why the number of documents published in India is high but the citations received are lower, since the time between the publication year and the citable years is not long.

4.4. Author Network Analysis

A total of 1310 authors published on topics related to breast cancer image classification using deep learning. Among them, 9 authors (0.69%) published at least five documents, 39 authors (2.98%) contributed between three and four publications, and 1262 authors (96.34%) published at most two documents. In Figure 5a, the lines connecting authors show their cooperation links. For example, a reasonable research link is indicated by the close and strong interconnections between the collaborations of Zhang Y., Li X., and Wang X. from Cluster 1 (Red), Wang L. in Cluster 5 (Purple), and Li Z. in Cluster 8 (Brown). The authors Madabhushi, Gilmore, and Zhang S. in Cluster 6 (Aqua) were from the United States, while most authors from Cluster 1 (Red) were from China. This indicates that authors from the same country are closely linked and more likely to work together. Based on the density visualization (Figure 5b), Madabhushi A., Gilmore H., Li Y., Wang J., Li X., and Zhang Y. led the collaboration in breast cancer histopathology image research.
Table 4 presents the top 10 most productive authors ranked by total link strength. It is interesting to note that the authors started to work collaboratively and contribute to publications after 2017; the research domain has maintained its growth rate since. The total link strength of an author shows the closeness of collaboration, meaning that a higher total link strength indicates that the author collaborates more often. An author from China, Li Y., had the most active collaboration with other authors such as Li L., Zhang H., Xu J., and Wang P., but the results showed that Madabhushi A. was the most highly cited author on the research topic. Some authors such as Xu J. and Gilmore H. had lower total link strength but recorded highly cited publications. This could be explained by their popular publication related to nuclei detection using breast cancer histopathology images, which has more than 600 citations [90].
Table 5 shows additional information on the research institutes and their research focus, ranked based on the number of documents published in 2014–2021. Case Western Reserve University has the highest number of publications, focusing on convolutional neural networks, digital pathology, and image classification. Madabhushi A. from Case Western Reserve University has collaborated with authors from various institutes in all nine documents published; hence, it is not surprising that Madabhushi A. has the highest number of paper citations. Out of the 10 research institutes, four are in China, two are in Canada, and one each is in the United States, India, the Netherlands, and Sweden. This finding implies that convolutional neural network and deep-learning-related research has improved in China over these years [91].

4.5. Journal Network Analysis

In the journal network analysis, the number of articles published and the number of citations were considered when examining the most prominent journals in the topic of deep learning and breast cancer image classification. The citation analysis of journals resulted in 190 journals for 373 documents. Table 6 gives the top 20 journals that published on breast cancer image classification using deep learning. Most publications on the related topic were published in Lecture Notes in Computer Science, Proceedings—International Symposium on Biomedical Imaging, IEEE Access, Scientific Reports, and Communications in Computer and Information Science. Based on other indicators, IEEE Transactions on Medical Imaging and Scientific Reports have significantly higher numbers of citations, with 703 and 451 citations, respectively. As shown in Figure 6, different node sizes represent different numbers of publications in the journals. Using a threshold of at least three articles per journal, only 24 journals were mapped in the network.
Research collaboration aims to combine various types of expertise for the development of research outputs by linking knowledge and skills together. Co-authorship networks are commonly used to examine collaboration patterns and discover influential authors and organizations [92]. The analysis illustrates the social network structure that exists between individuals or organizations. Recently, technological breakthroughs in CAD systems have helped to improve the computational time of diagnosis and minimize the rate of misdiagnosis during image classification [93,94].
In the analysis, the involvement of the United States, China, and India as the most central countries in the network shows their scientific contribution to breast cancer and deep learning issues globally. The distance between the circles (nodes) implies the collaboration strength, such that a greater distance represents less collaboration between countries. Currently, the United States and China have contributed more than 40% of the total publications, and the collaboration strength between these countries is high. According to [34], the high number of publications in both countries is related to the investment of the business sectors in their Research and Development (R&D) expenditure. Apart from that, there is a growing trend of developing countries engaging in research related to these issues. For example, the significant performance of China, India, and Pakistan is in line with previous studies that revealed breast cancer to be among the important illnesses and research areas [95,96,97]. Both developed and developing countries are publishing their research, since breast cancer is a global burden [98].
Based on the density visualization, Madabhushi A. and Gilmore H. are again the most productive and most linked authors on the research topic. Many authors from various affiliations and countries collaborate with them, such as Cruz-Roa A. from Universidad de los Llanos, Colombia [99] and Xu J. from Nanjing University of Information Science and Technology, China [100,101].

4.6. Co-Occurrence Analysis of Author Keywords

For the years 2014 to 2021, a co-occurrence analysis of author keywords was conducted with a minimum of three keyword occurrences as the threshold for the study. Out of 657 keywords, 42 were found to be relevant. There are seven distinct clusters in the results (Figure 7). When two keywords appear together in one or more articles, they are more likely to form a cluster. In Figure 7, the co-occurrence network map of keywords is depicted, such that the larger the circle, the higher the co-occurrence of the keyword. Furthermore, keywords positioned closer together indicate a stronger relationship. The average year of publication of the keywords is indicated by colors. Notably, the focus of research from 2018 to 2019 was on biopsy image aspects (dark blue) such as “histopathology image analysis”, “digital pathology”, “convolutional neural networks”, “whole slide images”, and “computer-aided diagnostics”. Instead, the network map reveals a greater focus on breast cancer classification approaches such as “deep learning”, “transfer learning”, “CNN”, “image classification”, “medical image processing”, and “feature extraction” from 2019 to date.
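A minimal sketch of how keyword occurrences and co-occurrences with a threshold of three could be computed from a Scopus export, assuming an "Author Keywords" column of semicolon-separated terms; VOSviewer performs this counting and the subsequent mapping internally.

```python
# Count author-keyword occurrences and pairwise co-occurrences (assumed export format).
from collections import Counter
from itertools import combinations
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # placeholder file name
keyword_lists = [
    sorted({k.strip().lower() for k in kw.split(";") if k.strip()})
    for kw in df["Author Keywords"].dropna()
]

occurrence = Counter(k for kws in keyword_lists for k in kws)
kept = {k for k, n in occurrence.items() if n >= 3}  # minimum of three occurrences

cooccurrence = Counter()
for kws in keyword_lists:
    for a, b in combinations([k for k in kws if k in kept], 2):
        cooccurrence[(a, b)] += 1

print(occurrence.most_common(5))
print(cooccurrence.most_common(5))
```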
The top keywords identified through the co-occurrence analysis are breast cancer and deep learning, with 152 and 139 total counts, respectively. The result is as expected, since breast cancer and deep learning are part of the search keywords for the bibliometric analysis. Breast cancer studies received high attention in research related to deep learning. According to Samb et al. [102], chronic illness will account for 80% of human deaths by 2023, which also contributes to global issues, and proper treatments aimed at combating the illness may benefit the healthcare system. Specifically, breast cancer is also one of the current leading causes of death, and the mortality rate is still high, even though the mortality trend has been declining since 1989 [3]. Hence, researchers focus their work on early detection of breast cancer through deep learning technology [103,104]. This is supported by the overlay visualization in Figure 7, where the research direction aimed at the efficiency of CNNs for image analysis from 2018 to 2020. In 2018, Cruz-Roa et al. [99] proposed a new CNN-based method for histopathology image analysis on whole slide images. They applied an adaptive sampling technique to overcome issues with larger image sizes.
Spanhol [12] stated that a well-described image database is important for CAD system research and introduced a new histopathology image dataset known as BreakHis, together with some experimental results using CNN models. Meanwhile, in 2019, Ghosh et al. [88] studied deep learning and image segmentation and revealed that the medical imaging field needs various segmentations, such as nuclei segmentation, for reliable CNN performance. Alom et al. [63] proposed an Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model based on several criteria such as magnification factors, image resizing, and image augmentation and segmentation. Results showed that the IRRCNN model outperformed the state-of-the-art method from 2016 on the BreakHis dataset. In 2020, Salama et al. [105] introduced a hybrid deep learning method for breast cancer detection using the pre-trained models ResNet50 and VGG16. Theoretically, a promising accuracy rate depends on the amount of data for model training, such that a large volume of training samples leads to a better accuracy rate. Since medical images have a limited sample size, the authors addressed the limitation by utilizing a data augmentation technique and transfer learning, which revealed that hand-crafted features and human intervention can be discarded. The hybrid ResNet50 model achieved the highest accuracy, 97.98%, compared with the hybrid VGG16 and other models. However, for this deep learning algorithm to be fully established and exploited on a worldwide scale, significant challenges must be overcome. Some discussion on the challenges of deep learning for breast cancer classification using histopathology images is provided in the next section.

4.7. Computational Method for Histopathology Images

In recent years, there has been growth and development in the use of deep learning algorithms for histopathology image analysis, specifically CNN methods. CNN methods can be used for identifying regions of interest (ROIs), feature extraction, and image classification. The advent of digital histopathology images with CNN methods offers tremendous potential for assisting pathologists in their jobs. Accordingly, Table 7 shows a summary of some deep learning algorithms based on CNN methods for histopathology images. The five listed references are from high-impact journals with over 100 citations.
Cruz-Roa et al. [106] aimed to assess the accuracy and reliability of deep learning algorithms for classifying digital images as invasive tumor. They offered a novel method for classifying invasive tumor on whole-slide images using a CNN-based method. In this study, classification performance was assessed across all the images retrieved from the Cancer Institute of New Jersey (CINJ) in the form of whole-slide images. They used three different convolutional network (ConvNet) architectures—a three-layer ConvNet, a four-layer ConvNet, and a six-layer ConvNet—as classifiers and compared them with handcrafted features (color, shape, texture, and topography). They concluded that the classification performance associated with those features was lower and more inconsistent compared with the ConvNet classifiers. Meanwhile, in mitosis detection analysis, the use of handcrafted features alone may result in a low-accuracy model, whereas CNN methods suffer from high computational cost. Hence, motivated by these drawbacks of handcrafted features and CNN methods, Wang H. et al. [51] introduced a hybrid approach for mitosis detection on the ICPR12 dataset. To address these issues, handcrafted features and a CNN method were combined through a cascaded ensemble. The results demonstrate that the accuracy of the proposed approach still needs to be improved and that a GPU should be used to create a deep multilayer CNN model. Han Z. et al. [24] presented a breast cancer multi-classification technique that makes use of a deep learning model. They implemented a complete recognition approach based on a newly developed class-structure-based deep convolutional neural network (CSDCNN) to provide a consistent and accurate solution for breast cancer classification. They also utilized multi-scale data augmentation and over-sampling approaches to overcome overfitting and class imbalance issues. On a large dataset, the proposed CNN model performed admirably.
Ghosh S. et al. [88] stated that CNNs are among the most widely used methods in computer vision. For segmentation tasks, CNN methods have undergone many basic adjustments to perform effectively. In addition, back-propagation enabled CNNs to train a cascaded set of convolutional kernels, and this has been greatly improved since then. Generally, they stated that the speed and accuracy of models are important factors in performance evaluation. The speed may be increased through network compression using depth-wise separable convolutions, kernel factorizations, and fewer spatial convolutions. The popularity of generative adversarial networks (GANs) has recently risen, but there is still some room for improvement in image segmentation. A study by Alom M. Z. et al. [63] demonstrated how deep learning has outperformed state-of-the-art approaches in medical imaging areas. They developed an approach for breast cancer classification known as the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model. This sophisticated DCNN model combines the strengths of Inception-v4, ResNet, and the recurrent CNN (RCNN) with several criteria on data augmentation techniques. Compared with other relevant deep learning algorithms such as Inception, RCNN, and the residual network, the IRRCNN model offers better performance while utilizing the same or fewer network parameters.

4.8. Challenges and Future Directions

In this bibliometric analysis, we found that deep learning algorithms can be utilized to classify breast cancer histopathology images, given that the model performance (in terms of accuracy) is equal to or better than that of healthcare professionals. However, some parameters must still be considered for reliable and consistent output. With so much focus on the advancement of deep learning, more individuals are interested in its performance in healthcare.

4.8.1. Large Image Size

In deep learning, image classification frequently utilizes small images as input to the network. Large images have to be resized to fit the network requirements, given that larger images lead to a larger number of parameters to estimate and greater computational power and memory usage. Whole slide images (WSIs) are commonly difficult to examine in their entirety, and resizing the images can discard cell-level information, which leads to less accurate image classification. Therefore, the WSI is often divided into patches (small regions) so that each patch can be evaluated independently. Recently, the findings of Zhou L. et al. [107] demonstrated the benefits of using CNN methods to classify breast images patch by patch, and the assessment of breast imaging information may yield more accurate and reproducible imaging diagnoses than human interpretation.
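A minimal sketch of the patch-wise strategy described above, assuming the openslide-python library, a hypothetical slide file, and an already trained classify_patch function; the patch size, stride, and background filter are illustrative choices, not values from the cited studies.

```python
# Slide a window over a whole slide image and classify tissue patches independently.
import numpy as np
import openslide

slide = openslide.OpenSlide("case_001.svs")  # placeholder WSI file
patch_size, stride = 224, 224
width, height = slide.dimensions

predictions = []
for y in range(0, height - patch_size + 1, stride):
    for x in range(0, width - patch_size + 1, stride):
        patch = np.array(slide.read_region((x, y), 0, (patch_size, patch_size)))[:, :, :3]
        if patch.mean() > 230:  # skip mostly-background (white) patches
            continue
        predictions.append(((x, y), classify_patch(patch)))  # classify_patch is an assumed model
```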

4.8.2. Color Variations

Color variation is another issue for deep learning models when comparable results are required during analysis. Different batches or manufacturers of staining solutions, the thickness of tissue sections, staining settings, and scanner models are all sources of variance [49]. Learning without taking color variation into account may degrade the performance of deep learning models. Several techniques have been proposed to deal with the color variation of images, such as color augmentation, color normalization, and grayscale conversion [49]. Grayscale conversion is the simplest method [59], but it may overlook critical information in the color representation commonly used by pathologists. Color normalization attempts to change the color values of an image pixel by pixel using methods such as color constancy, color deconvolution, and color transfer. Color normalization may be appropriate when the images have identical cell or tissue compositions. However, the utilization of color normalization should be handled carefully because it may reduce the accuracy of classification algorithms for histopathology images [108].
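Two of the color-handling options discussed above, grayscale conversion and a simple stain-channel (H&E space) color augmentation, can be sketched with scikit-image as follows; this is illustrative only, not a validated normalization method.

```python
# Illustrative color handling for H&E histopathology patches.
import numpy as np
from skimage.color import rgb2gray, rgb2hed, hed2rgb

def to_grayscale(rgb):
    """Simplest option: drop color entirely (may lose stain information)."""
    return rgb2gray(rgb)

def perturb_stains(rgb, rng, sigma=0.05):
    """Color augmentation: slightly jitter the hematoxylin/eosin/DAB channels."""
    hed = rgb2hed(rgb)
    hed *= 1.0 + rng.normal(0.0, sigma, size=(1, 1, 3))  # per-channel scaling
    return np.clip(hed2rgb(hed), 0.0, 1.0)

rng = np.random.default_rng(0)
# image is assumed to be an RGB float array in [0, 1]
# augmented = perturb_stains(image, rng)
```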

4.8.3. Insufficient Data

When data are insufficient, CNN models usually generalize less well and may suffer from overfitting. One approach to avoid this issue is data augmentation, which helps to increase the performance of CNN models in image classification. Recently, automatic approaches to data augmentation, such as data augmentation based on multi degree-of-freedom (DOF) automatic image acquisition, have been presented by Chen L. et al. [109]. It is necessary to assess the physical validity of the created samples and the implications of the generated problems for algorithm performance. Several methods for generating synthetic samples using generative adversarial networks have recently been proposed by Zhou F. et al. [110]. Generative adversarial networks can generate samples for data augmentation tasks rapidly, especially in image-to-image translation [20].
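A minimal sketch of the basic augmentation strategies mentioned in this review (flipping, rotation, and zoom/scaling) using Keras preprocessing layers; GAN-based synthesis would require a separate, considerably more involved pipeline.

```python
# Simple geometric data augmentation for image classification.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),  # up to roughly ±36 degrees
    layers.RandomZoom(0.1),
])

# train_ds is an assumed tf.data.Dataset of (image, label) batches
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```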

5. Conclusions

This bibliometric analysis highlights the growing trend of breast cancer and deep learning research globally. This study conducted a bibliometric analysis and visualization of publications on breast cancer image classification using deep learning from 2014 to 2021 and examined some noteworthy findings connected to the related publications. The topic of breast cancer image classification using deep learning has seen a lot of research over the last eight years, with the publication output growing at an exponential rate since 2014. There is a growing interest in breast cancer and deep learning research, in response to the pressing demands of urban growth and quality of life. With the technological advancements that have occurred in the last two decades, tremendous progress has been noticed in breast cancer and deep learning studies across all disciplines [93,105].
The main study areas in the realm of breast cancer image classification using deep learning could be recognized based on the co-keyword network analysis: (i) breast cancer; (ii) deep learning; (iii) convolutional neural network; (iv) digital pathology; and (v) transfer learning. The theme of the studies changed swiftly as time went on, and several fields of breast cancer image classification using deep learning research were thriving at the same time, according to the keyword burst analysis. Histopathology images, invasive ductal carcinoma, and the BreakHis dataset have all become new research hotspots. About 98.54% of authors (n = 1291/1310) were credited in not more than three papers on the topic of breast cancer image classification using deep learning, according to the co-authorship analyses. This could indicate that a substantial percentage of authors were new to the field of research. The author collaboration network analysis revealed that Li Y., Madabhushi A., and Gilmore H. were among the most productive, most linked, and most cited authors. This suggests that those authors are pioneers in the field. Over the past eight years, deep-learning-related methods, especially CNNs, have shown outstanding performance in breast cancer image classification. However, data related to medical or microscopy images are normally limited due to the small number of patients, while a large amount of data is required to train the models effectively. Therefore, some researchers used image segmentation techniques to overcome the problem. Data augmentation can help to increase the number of input images by adding copies of the original input images. The new images are slightly modified using several data augmentation strategies such as rotation, flipping, and scaling.
In this study, some challenges related to the CNN method are discussed, and data insufficiency might be the biggest challenge in medical data for image classification. This is also supported by Komura and Ishikawa [49], whose work stated that a large amount of training data is important for image classification tasks. A vast amount of research has been conducted on CNN methods, with several adjustments to reach model efficiency in image classification, specifically on breast cancer histopathology images. As discussed in the previous section, some recent studies revealed that generative adversarial networks (GANs) could be used to generate samples for training datasets, so that the issue of data scarcity can be tackled. The implementation of GANs as a data synthesis option should be further explored in future studies to reduce the computational time and improve the performance of CNN methods.
The country collaboration analysis in VOSviewer divided the 35 countries into nine strongly linked research clusters, led by the United States, China, India, South Korea, and the United Kingdom. These countries were also at the forefront of collaborative efforts to classify breast cancer images using deep learning. The United States and China were both ranked in the top two in the author collaboration and country collaboration analyses. China, on the other hand, has recently adopted a more cooperative attitude. In fact, China is one of the world’s newest scientific hubs.
This bibliometric study has some limitations to be addressed. First, the data collection was restricted to Scopus’ core collection, with refinements such as “source type” and “language” filters being applied. Other databases such as PubMed or WoS could have been included as well. Nonetheless, Scopus is one of the world’s largest and most utilized databases for scientific publication analysis, particularly in the healthcare area. Second, since some recently published papers have low citation frequencies, there may still be discrepancies between the true research status and our bibliometric analysis results [111]. In conclusion, the role of deep learning in breast cancer image classification will keep evolving. However, deep learning is not a replacement for pathologists; instead, it will continue to assist them with tools that are both effective and efficient. This bibliometric analysis could be used as a springboard for more specific and in-depth research.

Author Contributions

Conceptualization, S.S.M.K. and C.-Y.L.; methodology, S.S.M.K.; software, S.S.M.K.; resources, C.-Y.L.; writing—original draft preparation, S.S.M.K.; writing—review and editing, C.-Y.L., M.A.A., M.A.A.B., S.A.B., N.R. and M.F.; visualization, S.S.M.K.; supervision, C.-Y.L., M.A.A., M.A.A.B., S.A.B., N.R. and M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National University of Malaysia (UKM) under grant number DIP-2018-039.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

It is with the deepest regret that we record the death of Choong-Yeun Liong on 25 September 2021. His contributions as the main supervisor will be sorely missed. He was a well-loved colleague at the National University of Malaysia (UKM) for many years.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nenclares, P.; Harrington, K.J. The Biology of Cancer. Medicine 2020, 48, 67–72.
2. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global Cancer Statistics 2018: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2018, 68, 394–424.
3. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer Statistics, 2020. CA Cancer J. Clin. 2020, 70, 7–30.
4. Torre, L.A.; Siegel, R.L.; Ward, E.M.; Jemal, A. Global Cancer Incidence and Mortality Rates and Trends—An Update. Cancer Epidemiol. Biomark. Prev. 2016, 25, 16–27.
5. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249.
6. Momenimovahed, Z.; Salehiniya, H. Epidemiological Characteristics of and Risk Factors for Breast Cancer in the World. Breast Cancer Targets Ther. 2019, 11, 151–164.
7. Abdelwahab Yousef, A.J. Male Breast Cancer: Epidemiology and Risk Factors. Semin. Oncol. 2017, 44, 267–272.
8. Nahid, A.-A.; Kong, Y. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey. Comput. Math. Methods Med. 2017, 2017, 3781951.
9. Skandalakis, J.E. Embryology and Anatomy of the Breast. In Breast Augmentation; Shiffman, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 3–24. ISBN 978-3-540-78948-2.
10. Bombonati, A.; Sgroi, D.C. The Molecular Pathology of Breast Cancer Progression. J. Pathol. 2011, 223, 308–318.
11. Gajdosova, V.; Lorencova, L.; Kasak, P.; Tkac, J. Electrochemical Nanobiosensors for Detection of Breast Cancer Biomarkers. Sensors 2020, 20, 4022.
12. Spanhol, F.A. Automatic Breast Cancer Classification from Histopathological Images: A Hybrid Approach. Ph.D. Thesis, Federal University of Parana, Curitiba, Brazil, 2018.
13. Liu, Y.; Ren, L.; Cao, X.; Tong, Y. Breast Tumors Recognition Based on Edge Feature Extraction Using Support Vector Machine. Biomed. Signal Process. Control 2020, 58, 101825.
14. Danch-Wierzchowska, M.; Borys, D.; Bobek-Bilewicz, B.; Jarzab, M.; Swierniak, A. Simplification of Breast Deformation Modelling to Support Breast Cancer Treatment Planning. Biocybern. Biomed. Eng. 2016, 36, 531–536.
15. Mewada, H.K.; Patel, A.V.; Hassaballah, M.; Alkinani, M.H.; Mahant, K. Spectral–Spatial Features Integrated Convolution Neural Network for Breast Cancer Classification. Sensors 2020, 20, 4747.
16. Kiambe, K.; Kiambe, K. Breast Histopathological Image Feature Extraction with Convolutional Neural Networks for Classification. ICSES Trans. Image Process. Pattern Recognit. (ITIPPR) 2018, 4, 4–12.
17. Mathew, T.; Kini, J.R.; Rajan, J. Computational Methods for Automated Mitosis Detection in Histopathology Images: A Review. Biocybern. Biomed. Eng. 2021, 41, 64–82.
18. Zhu, C.; Song, F.; Wang, Y.; Dong, H.; Guo, Y.; Liu, J. Breast Cancer Histopathology Image Classification through Assembling Multiple Compact CNNs. BMC Med. Inform. Decis. Mak. 2019, 19, 198.
19. Valieris, R.; Amaro, L.; de Toledo Osório, C.A.B.; Bueno, A.P.; Mitrowsky, R.A.R.; Carraro, D.M.; Nunes, D.N.; Dias-Neto, E.; da Silva, I.T. Deep Learning Predicts Underlying Features on Pathology Images with Therapeutic Relevance for Breast and Gastric Cancer. Cancers 2020, 12, 3687.
20. Lagree, A.; Mohebpour, M.; Meti, N.; Saednia, K.; Lu, F.I.; Slodkowska, E.; Gandhi, S.; Rakovitch, E.; Shenfield, A.; Sadeghi-Naini, A.; et al. A Review and Comparison of Breast Tumor Cell Nuclei Segmentation Performances Using Deep Convolutional Neural Networks. Sci. Rep. 2021, 11, 8025.
21. Choudhary, T.; Mishra, V.; Goswami, A.; Sarangapani, J. A Transfer Learning with Structured Filter Pruning Approach for Improved Breast Cancer Classification on Point-of-Care Devices. Comput. Biol. Med. 2021, 134, 104432.
22. Kozegar, E.; Soryani, M.; Behnam, H.; Salamati, M.; Tan, T. Computer Aided Detection in Automated 3-D Breast Ultrasound Images: A Survey. Artif. Intell. Rev. 2020, 53, 1919–1941.
23. Murtaza, G.; Shuib, L.; Abdul Wahab, A.W.; Mujtaba, G.; Mujtaba, G.; Nweke, H.F.; Al-garadi, M.A.; Zulfiqar, F.; Raza, G.; Azmi, N.A. Deep Learning-Based Breast Cancer Classification through Medical Imaging Modalities: State of the Art and Research Challenges. Artif. Intell. Rev. 2020, 53, 1655–1720.
24. Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast Cancer Multi-Classification from Histopathological Images with Structured Deep Learning Model. Sci. Rep. 2017, 7, 4172.
25. Rezaeilouyeh, H.; Mollahosseini, A.; Mahoor, M.H. Microscopic Medical Image Classification Framework via Deep Learning and Shearlet Transform. J. Med. Imaging 2016, 3, 044501.
26. Pizarro, R.A.; Cheng, X.; Barnett, A.; Lemaitre, H.; Verchinski, B.A.; Goldman, A.L.; Xiao, E.; Luo, Q.; Berman, K.F.; Callicott, J.H.; et al. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm. Front. Neuroinform. 2016, 10, 52.
27. Farjam, R.; Soltanian-Zadeh, H.; Zoroofi, R.A.; Jafari-Khouzani, K. Tree-Structured Grading of Pathological Images of Prostate. In Proceedings of the SPIE 5747, Medical Imaging 2005: Image Processing, San Diego, CA, USA, 29 April 2005; pp. 1–12.
28. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic Cell Nuclei Segmentation and Classification of Breast Cancer Histopathology Images. Signal Process. 2016, 122, 1–13.
29. Lopez Martinez, R.E.; Sierra, G. Research Trends in the International Literature on Natural Language Processing, 2000–2019—A Bibliometric Study. J. Scientometr. Res. 2020, 9, 310–318.
30. Ahmi, A.; Mohamad, R. Bibliometric Analysis of Global Scientific Literature on Web Accessibility. Int. J. Recent Technol. Eng. 2019, 7, 250–258.
31. Marczewska, M.; Kostrzewski, M. Sustainable Business Models: A Bibliometric Performance Analysis. Energies 2020, 13, 6062.
32. de las Heras-Rosas, C.; Herrera, J.; Rodríguez-Fernández, M. Organisational Commitment in Healthcare Systems: A Bibliometric Analysis. Int. J. Environ. Res. Public Health 2021, 18, 2271.
33. Ozen Çınar, İ. Bibliometric Analysis of Breast Cancer Research in the Period 2009–2018. Int. J. Nurs. Pract. 2020, 26, e12845.
34. Salod, Z.; Singh, Y. A Five-Year (2015 to 2019) Analysis of Studies Focused on Breast Cancer Prediction Using Machine Learning: A Systematic Review and Bibliometric Analysis. J. Public Health Res. 2020, 9, 65–75.
35. Joshi, S.A.; Bongale, A.M.; Bongale, A. Breast Cancer Detection from Histopathology Images Using Machine Learning Techniques: A Bibliometric Analysis. Libr. Philos. Pract. 2021, 5376, 1–29.
36. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. Radiographics 2017, 37, 505–515.
37. Bengio, Y. Learning Deep Architectures for AI. In Foundations and Trends® in Machine Learning; University of California: Berkeley, CA, USA, 2009; Volume 2, pp. 1–127.
38. Li, X.; Shen, X.; Zhou, Y.; Wang, X.; Li, T.-Q. Classification of Breast Cancer Histopathological Images Using Interleaved DenseNet with SENet (IDSNet). PLoS ONE 2020, 15, e0232127.
39. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. A Dataset for Breast Cancer Histopathological Image Classification. IEEE Trans. Biomed. Eng. 2016, 63, 1455–1462.
40. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand Challenge on Breast Cancer Histology Images. Med. Image Anal. 2019, 56, 122–139.
  41. Asri, H.; Mousannif, H.; Moatassime, H.A.; Noel, T. Using Machine Learning Algorithms for Breast Cancer Risk Prediction and Diagnosis. Procedia Comput. Sci. 2016, 83, 1064–1069. [Google Scholar] [CrossRef] [Green Version]
  42. Bharat, A.; Pooja, N.; Reddy, R.A. Using Machine Learning Algorithms for Breast Cancer Risk Prediction and Diagnosis. In Proceedings of the 2018 3rd International Conference on Circuits, Control, Communication and Computing (I4C), Bangalore, India, 3–5 October 2018; pp. 1–4. [Google Scholar] [CrossRef]
  43. Zhang, Y.; Deng, Q.; Liang, W.; Zou, X. An Efficient Feature Selection Strategy Based on Multiple Support Vector Machine Technology with Gene Expression Data. Biomed Res. Int. 2018, 2018, 7538204. [Google Scholar] [CrossRef] [PubMed]
  44. Kharya, S.; Agrawal, S.; Soni, S. Naive Bayes Classifiers: A Probabilistic Detection Model for Breast Cancer. Int. J. Comput. Appl. 2014, 92, 26–31. [Google Scholar] [CrossRef]
  45. Nahar, J.; Chen, Y.P.P.; Ali, S. Kernel-Based Naive Bayes Classifier for Breast Cancer Prediction. J. Biol. Syst. 2007, 15, 17–25. [Google Scholar] [CrossRef]
  46. Rashmi, G.D.; Lekha, A.; Bawane, N. Analysis of Efficiency of Classification and Prediction Algorithms (Naïve Bayes) for Breast Cancer Dataset. In Proceedings of the 2015 International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT), Mandya, India, 17–19 December 2015; pp. 108–113. [Google Scholar]
  47. Octaviani, T.L.; Rustam, Z. Random Forest for Breast Cancer Prediction. In Proceedings of the AIP Conference Proceedings, Depok, Indonesia, 30–31 October 2018; pp. 1–7. [Google Scholar] [CrossRef]
  48. Elgedawy, M.N. Prediction of Breast Cancer Using Random Forest, Support Vector Machines and Naïve Bayes. Int. J. Eng. Comput. Sci. 2017, 6, 19884–19889. [Google Scholar] [CrossRef]
  49. Komura, D.; Ishikawa, S. Machine Learning Methods for Histopathological Image Analysis. Comput. Struct. Biotechnol. J. 2018, 16, 34–42. [Google Scholar] [CrossRef] [PubMed]
  50. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  51. Wang, H.; Cruz-Roa, A.; Basavanhally, A.; Gilmore, H.; Shih, N.; Feldman, M.; Tomaszewski, J.; Gonzalez, F.; Madabhushi, A. Mitosis Detection in Breast Cancer Pathology Images by Combining Handcrafted and Convolutional Neural Network Features. J. Med. Imaging 2014, 1, 034003. [Google Scholar] [CrossRef] [PubMed]
  52. Shahidi, F.; Daud, S.M.; Abas, H.; Ahmad, N.A.; Maarop, N. Breast Cancer Classification Using Deep Learning Approaches and Histopathology Image: A Comparison Study. IEEE Access 2020, 8, 187531–187552. [Google Scholar] [CrossRef]
  53. Fujita, H. AI-Based Computer-Aided Diagnosis (AI-CAD): The Latest Review to Read First. Radiol. Phys. Technol. 2020, 13, 6–19. [Google Scholar] [CrossRef]
  54. Lin, C.J.; Jeng, S.Y. Optimization of Deep Learning Network Parameters Using Uniform Experimental Design for Breast Cancer Histopathological Image Classification. Diagnostics 2020, 10, 662. [Google Scholar] [CrossRef] [PubMed]
  55. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [Green Version]
  56. Mishra, C.; Gupta, D.L. Deep Machine Learning and Neural Networks: An Overview. IAES Int. J. Artif. Intell. 2017, 6, 66–73. [Google Scholar] [CrossRef]
  57. Nguyen, P.T.; Nguyen, T.T.; Nguyen, N.C.; Le, T.T. Multiclass Breast Cancer Classification Using Convolutional Neural Network. In Proceedings of the 2019 International Symposium on Electrical and Electronics Engineering (ISEE), Ho Chi Minh City, Vietnam, 10–12 October 2019; pp. 130–134. [Google Scholar] [CrossRef]
  58. Bengio, Y.; Lee, H. Editorial Introduction to the Neural Networks Special Issue on Deep Learning of Representations. Neural Netw. 2015, 64, 1–3. [Google Scholar] [CrossRef]
  59. Hirra, I.; Ahmad, M.; Hussain, A.; Ashraf, M.U.; Saeed, I.A.; Qadri, S.F.; Alghamdi, A.M.; Alfakeeh, A.S. Breast Cancer Classification from Histopathological Images Using Patch-Based Deep Learning Modeling. IEEE Access 2021, 9, 24273–24287. [Google Scholar] [CrossRef]
  60. Zuluaga-Gomez, J.; Al Masry, Z.; Benaggoune, K.; Meraghni, S.; Zerhouni, N. A CNN-Based Methodology for Breast Cancer Diagnosis Using Thermal Images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2021, 9, 131–145. [Google Scholar] [CrossRef]
  61. Hameed, Z.; Zahia, S.; Garcia-Zapirain, B.; Aguirre, J.J.; Vanegas, A.M. Breast Cancer Histopathology Image Classification Using an Ensemble of Deep Learning Models. Sensors 2020, 20, 4373. [Google Scholar] [CrossRef] [PubMed]
  62. Pavithra, P.; Ravichandran, K.S.; Sekar, K.R.; Manikandan, R. The Effect of Thermography on Breast Cancer Detection—A Survey. Syst. Rev. Pharm. 2018, 9, 10–16. [Google Scholar] [CrossRef]
  63. Alom, M.Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network. J. Digit. Imaging 2019, 32, 605–617. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Železnik, D.; Blažun Vošner, H.; Kokol, P. A Bibliometric Analysis of the Journal of Advanced Nursing, 1976–2015. J. Adv. Nurs. 2017, 73, 2407–2419. [Google Scholar] [CrossRef] [PubMed]
  65. Liao, H.; Tang, M.; Luo, L.; Li, C.; Chiclana, F.; Zeng, X.J. A Bibliometric Analysis and Visualization of Medical Big Data Research. Sustainability 2018, 10, 166. [Google Scholar] [CrossRef] [Green Version]
  66. Guo, Y.; Hao, Z.; Zhao, S.; Gong, J.; Yang, F. Artificial Intelligence in Health Care: Bibliometric Analysis. J. Med. Internet Res. 2020, 22, e18228. [Google Scholar] [CrossRef]
  67. Bhattacharya, S. Some Salient Aspects of Machine Learning Research: A Bibliometric Analysis. J. Scientometr. Res. 2019, 8, 85–92. [Google Scholar] [CrossRef]
  68. Aria, M.; Cuccurullo, C. Bibliometrix: An R-Tool for Comprehensive Science Mapping Analysis. J. Informetr. 2017, 11, 959–975. [Google Scholar] [CrossRef]
  69. van Eck, N.J.; Waltman, L. Software Survey: VOSviewer, a Computer Program for Bibliometric Mapping. Scientometrics 2010, 84, 523–538. [Google Scholar] [CrossRef] [Green Version]
  70. Wahid, R.; Ahmi, A.; Alam, A.S.A.F. Growth and Collaboration in Massive Open Online Courses: A Bibliometric Analysis. Int. Rev. Res. Open Distance Learn. 2020, 21, 292–322. [Google Scholar] [CrossRef]
  71. Baas, J.; Schotten, M.; Plume, A.; Côté, G.; Karimi, R. Scopus as a Curated, High-Quality Bibliometric Data Source for Academic Research in Quantitative Science Studies. Quant. Sci. Stud. 2020, 1, 377–386. [Google Scholar] [CrossRef]
  72. Tober, M. PubMed, ScienceDirect, Scopus or Google Scholar—Which Is the Best Search Engine for an Effective Literature Research in Laser Medicine? Med. Laser Appl. 2011, 26, 139–144. [Google Scholar] [CrossRef]
  73. Al-antari, M.A.; Han, S.-M.; Kim, T.-S. Evaluation of Deep Learning Detection and Classification towards Computer-Aided Diagnosis of Breast Lesions in Digital X-Ray Mammograms. Comput. Methods Programs Biomed. 2020, 196, 105584. [Google Scholar] [CrossRef] [PubMed]
  74. Swiderski, B.; Kurek, J.; Osowski, S.; Kruk, M.; Barhoumi, W. Deep Learning and Non-Negative Matrix Factorization in Recognition of Mammograms. In Proceedings of the Eighth International Conference on Graphic and Image Processing (ICGIP 2016), Tokyo, Japan, 29–31 October 2016; pp. 1–7. [Google Scholar] [CrossRef]
  75. Grover, S.; Dalton, N. Abstract to Publication Rate: Do All the Papers Presented in Conferences See the Light of Being a Full Publication? Indian J. Psychiatry 2020, 62, 73–79. [Google Scholar] [CrossRef] [PubMed]
  76. Bar-Ilan, J. Web of Science with the Conference Proceedings Citation Indexes: The Case of Computer Science. Scientometrics 2010, 83, 809–824. [Google Scholar] [CrossRef]
  77. Purnell, P.J. Conference Proceedings Publications in Bibliographic Databases: A Case Study of Countries in Southeast Asia. Scientometrics 2021, 126, 355–387. [Google Scholar] [CrossRef]
  78. Janowczyk, A.; Madabhushi, A. Deep Learning for Digital Pathology Image Analysis: A Comprehensive Tutorial with Selected Use Cases. J. Pathol. Inform. 2016, 7, 29. [Google Scholar] [CrossRef] [PubMed]
  79. Nagpal, K.; Foote, D.; Liu, Y.; Chen, P.H.C.; Wulczyn, E.; Tan, F.; Olson, N.; Smith, J.L.; Mohtashamian, A.; Wren, J.H.; et al. Development and Validation of a Deep Learning Algorithm for Improving Gleason Scoring of Prostate Cancer. NPJ Digit. Med. 2019, 2, 48. [Google Scholar] [CrossRef] [Green Version]
  80. Bejnordi, B.E.; Veta, M.; Van Diest, P.J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.W.M.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA J. Am. Med. Assoc. 2017, 318, 2199–2210. [Google Scholar] [CrossRef] [PubMed]
  81. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Diagnosis and Precision Oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  82. Zakaria, R.; Ahmi, A.; Ahmad, A.H.; Othman, Z.; Azman, K.F.; Ab Aziz, C.B.; Ismail, C.A.N.; Shafin, N. Visualising and Mapping a Decade of Literature on Honey Research: A Bibliometric Analysis from 2011 to 2020. J. Apic. Res. 2021, 60, 359–368. [Google Scholar] [CrossRef]
  83. Bongaarts, J. United Nations Department of Economic and Social Affairs, Population Division, World Family Planning 2020: Highlights; United Nations Publications, 2020; 46 p. Popul. Dev. Rev. 2020, 46, 857–858. [Google Scholar] [CrossRef]
  84. Peters, H.P.F.; van Raan, A.F.J. Co-Word-Based Science Maps of Chemical Engineering. Part II: Representations by Combined Clustering and Multidimensional Scaling. Res. Policy 1993, 22, 47–71. [Google Scholar] [CrossRef]
  85. Waltman, L.; van Eck, N.J.; Noyons, E.C.M. A Unified Approach to Mapping and Clustering of Bibliometric Networks. J. Informetr. 2010, 4, 629–635. [Google Scholar] [CrossRef] [Green Version]
  86. Kumar, A.; Singh, S.K.; Saxena, S.; Lakshmanan, K.; Sangaiah, A.K.; Chauhan, H.; Shrivastava, S.; Singh, R.K. Deep Feature Learning for Histopathological Image Classification of Canine Mammary Tumors and Human Breast Cancer. Inf. Sci. 2020, 508, 405–421. [Google Scholar] [CrossRef]
  87. Zhang, Y.D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.H. Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network. Inf. Process. Manag. 2021, 58, 102439. [Google Scholar] [CrossRef]
  88. Ghosh, S.; Das, N.; Das, I.; Maulik, U. Understanding Deep Learning Techniques for Image Segmentation. ACM Comput. Surv. 2019, 52, 1–35. [Google Scholar] [CrossRef] [Green Version]
  89. Sudharshan, P.J.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple Instance Learning for Histopathological Breast Cancer Image Classification. Expert Syst. Appl. 2019, 117, 103–111. [Google Scholar] [CrossRef]
  90. Xu, J.; Xiang, L.; Liu, Q.; Gilmore, H.; Wu, J.; Tang, J.; Madabhushi, A. Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images. IEEE Trans. Med. Imaging 2016, 35, 119–130. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Chen, H.; Deng, Z. Bibliometric Analysis of the Application of Convolutional Neural Network in Computer Vision. IEEE Access 2020, 8, 155417–155428. [Google Scholar] [CrossRef]
  92. e Fonseca, B.D.P.F.; Sampaio, R.B.; de Fonseca, M.V.; Zicker, F. Co-Authorship Network Analysis in Health Research: Method and Potential Use. Health Res. Policy Syst. 2016, 14, 34. [Google Scholar] [CrossRef] [Green Version]
  93. Wang, P.; Wang, J.; Li, Y.; Li, P.; Li, L.; Jiang, M. Automatic Classification of Breast Cancer Histopathological Images Based on Deep Feature Fusion and Enhanced Routing. Biomed. Signal Process. Control 2021, 65, 102341. [Google Scholar] [CrossRef]
  94. Elmannai, H.; Hamdi, M.; AlGarni, A. Deep Learning Models Combining for Breast Cancer Histopathology Image Classification. Int. J. Comput. Intell. Syst. 2021, 14, 102341. [Google Scholar] [CrossRef]
  95. Chen, K.; Yao, Q.; Sun, J.; He, Z.; Yao, L.; Liu, Z. International Publication Trends and Collaboration Performance of China in Healthcare Science and Services Research. Isr. J. Health Policy Res. 2016, 5, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Ahmad, S.; Ur Rehman, S.; Iqbal, A.; Farooq, R.K.; Shahid, A.; Ullah, M.I. Breast Cancer Research in Pakistan: A Bibliometric Analysis. SAGE Open 2021, 11, 1–17. [Google Scholar] [CrossRef]
  97. Rangarajan, B.; Shet, T.; Wadasadawala, T.; Nair, N.S.; Sairam, R.M.; Hingmire, S.S.; Bajpai, J. Breast Cancer: An Overview of Published Indian Data. South Asian J. Cancer 2016, 5, 86–92. [Google Scholar] [CrossRef]
  98. Fitzmaurice, C.; Abate, D.; Abbasi, N.; Abbastabar, H.; Abd-Allah, F.; Abdel-Rahman, O.; Abdelalim, A.; Abdoli, A.; Abdollahpour, I.; Abdulle, A.S.M.; et al. Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-Years for 29 Cancer Groups, 1990 to 2017. JAMA Oncol. 2019, 5, 1749–1768. [Google Scholar] [CrossRef] [Green Version]
  99. Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A.; González, F. High-Throughput Adaptive Sampling for Whole-Slide Histopathology Image Analysis (HASHI) via Convolutional Neural Networks: Application to Invasive Breast Cancer Detection. PLoS ONE 2018, 13, e0196828. [Google Scholar] [CrossRef]
  100. Lu, C.; Xu, H.; Xu, J.; Gilmore, H.; Mandal, M.; Madabhushi, A. Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images. Sci. Rep. 2016, 6, 33985. [Google Scholar] [CrossRef]
  101. Xu, J.; Xiang, L.; Wang, G.; Ganesan, S.; Feldman, M.; Shih, N.N.; Gilmore, H.; Madabhushi, A. Sparse Non-Negative Matrix Factorization (SNMF) Based Color Unmixing for Breast Histopathological Image Analysis. Comput. Med. Imaging Graph. 2015, 46, 20–29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Samb, B.; Desai, N.; Nishtar, S.; Mendis, S.; Bekedam, H.; Wright, A.; Hsu, J.; Martiniuk, A.; Celletti, F.; Patel, K.; et al. Prevention and Management of Chronic Disease: A Litmus Test for Health-Systems Strengthening in Low-Income and Middle-Income Countries. Lancet 2010, 376, 1785–1797. [Google Scholar] [CrossRef]
  103. Ghosh, P.; Azam, S.; Hasib, K.M.; Karim, A.; Jonkman, M.; Anwar, A. A Performance Based Study on Deep Learning Algorithms in the Effective Prediction of Breast Cancer. In Proceedings of the International Joint Conference on Neural Networks, Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar] [CrossRef]
  104. Mahmood, T.; Li, J.; Pei, Y.; Akhtar, F.; Jia, Y.; Khand, Z.H. Breast Mass Detection and Classification Using Deep Convolutional Neural Networks for Radiologist Diagnosis Assistance. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference, COMPSAC 2021, Madrid, Spain, 12–16 July 2021; pp. 1918–1923. [Google Scholar] [CrossRef]
  105. Salama, W.M.; Elbagoury, A.M.; Aly, M.H. Novel Breast Cancer Classification Framework Based on Deep Learning. IET Image Process. 2020, 14, 3254–3259. [Google Scholar] [CrossRef]
  106. Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.N.C.; Tomaszewski, J.; González, F.A.; Madabhushi, A. Accurate and Reproducible Invasive Breast Cancer Detection in Whole-Slide Images: A Deep Learning Approach for Quantifying Tumor Extent. Sci. Rep. 2017, 7, 46450. [Google Scholar] [CrossRef] [Green Version]
  107. Zhou, L.; Wei, Q.; Dietrich, C.F. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology 2019, 294, 19–24. [Google Scholar] [CrossRef]
  108. Bianconi, F.; Kather, J.N.; Reyes-Aldasoro, C.C. Evaluation of Colour Pre-Processing on Patch-Based Classification of H&E-Stained Images. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2019; pp. 56–64. [Google Scholar] [CrossRef] [Green Version]
  109. Chen, L.; Yan, N.; Yang, H.; Zhu, L.; Zheng, Z.; Yang, X.; Zhang, X. A Data Augmentation Method for Deep Learning Based on Multi-Degree of Freedom (Dof) Automatic Image Acquisition. Appl. Sci. 2020, 10, 7755. [Google Scholar] [CrossRef]
  110. Zhou, F.; Yang, S.; Fujita, H.; Chen, D.; Wen, C. Deep Learning Fault Diagnosis Method Based on Global Optimization GAN for Unbalanced Data. Knowl.-Based Syst. 2020, 187, 104837. [Google Scholar] [CrossRef]
  111. Stephan, P.; Veugelers, R.; Wang, J. Reviewers are Blinkered by Bibliometrics. Nature 2017, 544, 411–412. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow diagram of the research process.
Figure 2. Flow diagram of the research process.
Figure 3. Co-authorship network visualization of countries in publication for 2014–2021.
Figure 4. Co-authorship overlay visualization of India.
Figure 5. (a) Co-authorship network visualization of authors in publication for 2014–2021. (b) Density visualization of authors in publication for 2014–2021.
Figure 6. Citation network visualization of journals in publication for 2014–2021.
Figure 7. Co-occurrence overlay visualization of keywords in publication for 2014–2021.
Table 1. Summary of image distribution for different magnification factors.
Magnification    40×      100×     200×     400×
Benign           652      644      623      588
Malignant        1370     1437     1390     1232
Table 2. Document types of the retrieved publications.
Document Type        Frequency    Percentage (n = 373)
Article              181          48.53
Conference paper     155          41.55
Conference review    15           4.02
Book chapter         7            1.87
Erratum              1            0.27
Note                 1            0.27
Review               12           3.22
Short survey         1            0.27
Total                373          100.00
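As an illustration of how such a breakdown can be produced from the bibliographic export, the short Python sketch below tabulates document types and their percentages from a comma-separated Scopus export. The file name scopus_export.csv and the column label "Document Type" are assumptions based on the usual Scopus export format, not artefacts of this study.

```python
import csv
from collections import Counter

# Hypothetical Scopus CSV export; Scopus record exports typically include
# a "Document Type" field for every entry.
with open("scopus_export.csv", newline="", encoding="utf-8-sig") as f:
    records = list(csv.DictReader(f))

counts = Counter(r.get("Document Type", "Undefined") for r in records)
total = sum(counts.values())

# Frequency and percentage per document type, most frequent first.
for doc_type, n in counts.most_common():
    print(f"{doc_type:<20} {n:>4} {100 * n / total:6.2f}%")
print(f"{'Total':<20} {total:>4} {100.00:6.2f}%")
```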
Table 3. Top 10 countries by co-authorship total link strength, 2014–2021.
Country           TLS 1    Links    Documents    Citations    Cluster
United States     51       21       68           2235         6
China             39       17       71           1586         9
India             26       16       80           674          1
South Korea       17       11       12           444          3
United Kingdom    17       10       20           190          2
Germany           14       10       14           239          3
Sweden            14       9        10           192          5
Pakistan          13       6        13           172          7
Portugal          12       10       5            157          3
Australia         10       7        13           201          2
1 Total link strength.
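Total link strength (TLS), as reported in Tables 3, 4, and 6, is the VOSviewer measure obtained by summing, for a given item, the strengths of all of its co-authorship links (here, the number of documents jointly authored with each partner country). A minimal Python sketch of this aggregation is given below; the pair counts are purely illustrative and are not taken from the analysed dataset.

```python
from collections import defaultdict

# Illustrative co-authorship counts between country pairs (number of
# jointly authored documents); the values are invented for the example.
pair_counts = {
    ("United States", "China"): 12,
    ("United States", "India"): 5,
    ("China", "India"): 3,
}

links = defaultdict(int)                # number of distinct collaborating partners
total_link_strength = defaultdict(int)  # sum of co-authorship counts (TLS)

for (a, b), n in pair_counts.items():
    links[a] += 1
    links[b] += 1
    total_link_strength[a] += n
    total_link_strength[b] += n

# Rank countries by total link strength, as VOSviewer does.
for country in sorted(total_link_strength, key=total_link_strength.get, reverse=True):
    print(f"{country:<15} TLS = {total_link_strength[country]:<3} links = {links[country]}")
```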
Table 4. Top 10 authors by co-authorship total link strength, 2014–2021.
Author           TLS 1    Links    Documents    Citations    Affiliation                                                      APY 2
Li Y.            18       10       7            93           Chongqing University, China                                      2018
Madabhushi A.    15       5        9            912          Case Western Reserve University, United States                   2017
Li X.            15       10       7            56           Chongqing University of Posts and Telecommunications, China      2020
Wang J.          14       8        4            31           Chongqing University, China                                      2020
Gilmore H.       13       5        6            891          Case Western Reserve University, United States                   2017
Zhang Y.         13       8        6            38           Nanjing University, China                                        2019
Li L.            13       7        4            36           Chongqing University, China                                      2020
Xu J.            11       7        4            485          Nanjing University, China                                        2018
Zhang H.         10       9        7            130          East China Jiaotong University, China                            2019
Li Z.            10       6        4            22           Chongqing University of Posts and Telecommunications, China      2020
1 Total link strength; 2 average publication year.
Table 5. Research institutes and their research focus.
Affiliation                                  Research Focus                                                           Documents
Case Western Reserve University              Convolutional neural network, digital pathology, image classification   9
Indian Institute of Technology Kharagpur     Features, convolutional neural network, whole slide images              7
Shenzhen University                          Image classification, convolutional neural network                      6
Radboud University Medical Center            Deep learning, whole slide images                                       6
University of Toronto                        Convolutional neural network, review analysis                           6
Karolinska Institute                         Convolutional neural network, classification, deep learning             5
Xiamen University                            Segmentation, detection, convolutional neural network                   5
Sunnybrook Health University                 Deep learning-based, convolutional neural network, feature extraction   5
Southern Medical University                  Deep learning, cancer staging, classification                           4
Chongqing University                         Features, convolutional neural network, image classification            3
Table 6. Top 20 journals in publication for 2014–2021.
Journal                                                                  TLS 1    Links    Documents    Cit 2
IEEE Access                                                              26       10       10           48
IEEE Transactions on Medical Imaging                                     24       13       5            703
Scientific Reports                                                       21       16       10           451
Computerized Medical Imaging and Graphics                                14       9        4            114
Medical Image Analysis                                                   14       10       5            177
Biocybernetics and Biomedical Engineering                                12       8        5            41
Lecture Notes in Computer Science (including LNAI and LNBI subseries)    12       8        36           433
Expert Systems with Applications                                         11       8        3            117
International Journal of Advanced Computer Science and Applications      11       7        4            66
Frontiers in Genetics                                                    10       6        3            71
Biomedical Signal Processing and Control                                 9        6        5            25
International Journal of Imaging Systems and Technology                  9        5        6            27
Journal of Medical Imaging                                               8        6        3            249
Communications in Computer and Information Science                       7        4        10           10
Advances in Intelligent Systems and Computing                            6        5        8            16
IEEE Journal of Biomedical and Health Informatics                        6        4        6            101
Proceedings of the International Joint Conference on Neural Networks     6        5        4            417
Proceedings—International Symposium on Biomedical Imaging                4        3        13           262
Lecture Notes in Electrical Engineering                                  3        2        3            0
Cancers                                                                  1        1        6            9
1 Total link strength; 2 citations.
Table 7. Five highly cited references on CNN methods published in high-impact journals.
References               Journal                       Model/Method                                     IF 1      H-Index    Cit 2    Year
Cruz-Roa et al. [106]    Scientific Reports            CNN/ConvNet                                      4.380     213        292      2017
Wang H. et al. [51]      Journal of Medical Imaging    CNN and handcrafted features                     3.610     29         272      2014
Han Z. et al. [24]       Scientific Reports            Structure-based deep CNN                         4.380     213        210      2017
Ghosh S. et al. [88]     ACM Computing Surveys         Deep learning, CNN                               10.282    163        126      2019
Alom M. Z. et al. [63]   Journal of Digital Imaging    Deep CNN, Inception-v4, ResNet, Recurrent CNN    4.056     58         123      2019
1 Impact factor; 2 citations.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
