Article

Double-Condensing Attention Condenser: Leveraging Attention in Deep Learning to Detect Skin Cancer from Skin Lesion Images

Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(22), 7231; https://doi.org/10.3390/s24227231
Submission received: 29 September 2024 / Revised: 10 November 2024 / Accepted: 11 November 2024 / Published: 12 November 2024
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)

Abstract
Skin cancer is the most common type of cancer in the United States and is estimated to affect one in five Americans. Recent advances have demonstrated strong performance on skin cancer detection, as exemplified by the state-of-the-art results in the SIIM-ISIC Melanoma Classification Challenge; however, these solutions leverage ensembles of complex deep neural architectures with immense storage and computation costs, and may therefore be intractable in practice. A recent development for TinyML applications integrates Double-Condensing Attention Condensers (DC-AC) into a self-attention neural network backbone architecture to allow for faster and more efficient computation. This paper explores leveraging this efficient self-attention structure to detect skin cancer in skin lesion images and introduces a deep neural network design with DC-AC customized for skin cancer detection from skin lesion images. We demonstrate that our approach, with only 1.6 million parameters and 0.32 GFLOPs, achieves better performance than traditional architecture designs, obtaining an area under the ROC curve (AUROC) of 0.90 on the public ISIC 2020 test set and 0.89 on the private ISIC test set, over 0.13 above the best Cancer-Net SCa network architecture design. The final model is publicly available as part of a global open-source initiative dedicated to accelerating advancement in machine learning to aid clinicians in the fight against cancer. Future work includes iterating on the design of the selected network architecture and refining the approach to generalize to other forms of cancer.

1. Introduction

Skin cancer is the most common type of cancer in the United States and is estimated to affect one in five Americans [1]. In the United States alone, the annual cost of treating skin cancer is estimated at USD 8.1 billion [1]. The main types of skin cancer are squamous cell carcinoma, basal cell carcinoma, and melanoma [2]. Although melanoma is less common than the other skin cancer types, it causes most skin cancer deaths due to its invasive nature [2]. However, the chances of surviving five years after a melanoma diagnosis significantly improve if the cancer is detected at an early stage [2]. At later stages, the cancer may have spread to nearby skin and organs, leading to more severe consequences for the patient and more expensive and complex treatments [2]. By contrast, early-stage melanoma can typically be removed with a standard surgery [2].
The current method to diagnose skin cancer relies on a dermatologist examining a tissue sample of a skin lesion under a microscope. Examples of a melanoma skin cancer image and a non-skin cancer image are shown in Figure 1. Unfortunately, the current diagnostic process requires a skin biopsy in which suspect skin cells are removed from the patient, which can cause anxiety and physical discomfort. Furthermore, sole dependence on the human dermatologist leaves room for erroneous diagnoses, as clinical judgement is subject to potential biases and high uncertainty [3,4]. Consequently, there has been a rise in research focused on computer-assisted diagnosis of melanoma skin cancer [5]. With the introduction of ensemble network architecture designs, much of the high-performing skin cancer detection research has centered on large, computationally expensive ensemble designs. Realistically, these approaches may not be tractable for clinical deployment: their high storage requirements preclude integration into mobile devices and portable dermoscopy devices, and their computational complexity increases diagnostic latency, which may further limit their clinical utility.
Image classification tasks require non-linear relationships to be captured. Since spatial relationships cannot be effectively represented using linear models [6], deep learning is preferred for image classification tasks [7]. A substantial problem affecting all neural network architectures is their tendency to overfit to the training data [8]. Despite this limitation, it has been shown that for many complex tasks with non-linear, non-convex loss landscapes, most neural network architectures (including the transformer model) tend to benefit from a larger number of parameters, as increasing the depth of the network decreases the inductive bias imposed on the model [9]. Due to the bias–variance tradeoff, it is also desirable to train such models on substantially large datasets to avoid overfitting [8]. Together, these criteria result in architectures of significant depth trained on large datasets, necessitating significant computational resources at training time. The large number of parameters in such architectures also results in significant storage requirements and demands significant computing resources during inference. Both of these requirements constrain the devices on which high-performance neural networks for image classification can be deployed. This is particularly relevant for resource-constrained environments and mobile devices that may be used for clinical skin cancer detection, such as portable dermoscopy devices.
The TinyML space specifically examines machine learning models in resource-constrained environments, and a recent innovation in the TinyML deep learning space is the Double-Condensing Attention Condenser (DC-AC) architecture design, shown to achieve high inference throughput despite its small size [10]. The design of the architecture was based on machine-driven exploration and leverages a DC-AC mechanism [10]. This mechanism introduces a second Attention Condenser module and calculates selective attention over the outputs of both condenser modules, which leads to highly condensed feature embeddings [10]. Learning more condensed feature embeddings improves the representational performance and efficiency of the network, which is especially useful for skin cancer as it improves attention on key characteristics associated with cancer while providing a wide field of vision. When compared to other state-of-the-art efficient networks such as MobileOne-S1, this architecture was also shown to achieve higher ImageNet top-1 accuracy with higher inference throughput and a smaller network architecture size [10].
Motivated by the DC-AC architecture, this paper explores leveraging the efficient self-attention structure to detect skin cancer in skin lesion images. We leverage the ISIC 2020 dataset [11] to investigate the performance of this smaller architecture on the unseen ISIC 2020 test set. We find that it has superior performance compared to both the previously released Cancer-Net SCa network architecture designs and MobileViT-S, and hence, we introduce the DC-AC, a deep neural network design customized for skin cancer detection from skin lesion images that is publicly available at https://github.com/catai9/Cancer-Net-SCa-DC-AC (accessed on 11 September 2024).

2. Related Works

2.1. Existing Deep Learning Approaches to Skin Cancer Detection

State-of-the-art performance in the Society for Imaging Informatics in Medicine (SIIM)–International Skin Imaging Collaboration (ISIC) Melanoma Classification Challenge exemplifies the recent advancements in skin cancer detection; however, such deep learning solutions leverage ensemble network architecture designs, requiring immense storage and computation costs. One of the top performers in this challenge used an ensemble network architecture design of EfficientNet B3-B7, se_resnext101, and resnest101 while emphasizing validation methods, model target selection, and the data preparation pipeline to find the optimal solution, which resulted in an area under the ROC curve (AUROC) score of 0.9490 [12]. Similarly, the second-place winner achieved an AUROC score of 0.9485 by leveraging an ensemble network architecture design of EfficientNet-B6, EfficientNet-B7, and a model trained on pseudolabeled test data [13,14]. Despite the high performance scores, these approaches are computationally expensive given the high storage costs of large ensemble models and their correspondingly high inference times.

2.2. Efficient Cancer-Net SCa Architectures

In 2020, Lee et al. released a suite of three Cancer-Net SCa network architecture designs tailored for skin cancer detection and designed using generative synthesis [15]. Notably, the macroarchitecture designs of the three networks are drastically different, with Cancer-Net SCa-A and Cancer-Net SCa-B leveraging the projection–expansion–projection–expansion (PEPE) design pattern and Cancer-Net SCa-C using a highly efficient self-attention architecture design with attention condensers [15]. As such, the three designs have different performance–efficiency tradeoffs: Cancer-Net SCa-C has the lowest computational complexity, Cancer-Net SCa-A has the highest sensitivity and NPV, and Cancer-Net SCa-B has the highest accuracy and PPV and the lowest architectural complexity [15]. The Cancer-Net SCa architecture designs were reported to achieve strong performance on skin cancer detection, with malignant sensitivity of up to 92.8% [15]. However, when applied to the completely unseen ISIC 2020 test set, the pretrained Cancer-Net SCa architecture designs have only an acceptable ability [16] to discriminate between skin cancer and non-skin cancer images (AUROC = 0.77).

2.3. Double-Condensing Attention Condenser

The Double-Condensing Attention Condenser (DC-AC) architecture design was recently introduced with the benefit of balancing network size and performance [10]. This architecture design builds on the original idea of Attention Condensers, which comprise condenser layers, embedding layers, and expansion layers [17]. The condenser layers project layer inputs into a lower-dimensional space while preserving activations in close proximity to other high activations [17]. The embedding layer produces a condensed embedding, accounting for both local and cross-channel activation dependencies [17]. The expansion layers project the produced embedding back to the original input space dimensions [17]. An attention score (selective attention) is then calculated between the original inputs and the condenser module output projections [17]. Unlike the original Attention Condensers, the DC-AC architecture introduces DC-AC modules, which contain a second Attention Condenser module and calculate selective attention for the outputs of both condenser modules [10]. The two condensers are not identical: one applies two embedding layers after projecting the inputs into the lower-dimensional space, leading to the learning of more condensed feature embeddings [10]. With a columnar network architecture, different branches in the early layers learn disentangled embeddings that are then merged in the deeper layers to gradually increase the channels; hence, wider areas of the original image are covered by the self-attention blocks. This kind of architecture design is useful for skin cancer detection, as the selective attention and condensation improve focus on key characteristics associated with cancer while providing a wider field of vision in the image. This is a significant advantage, as identifying the class of a specific skin lesion is difficult: as seen in Figure 2, benign skin lesions can look malignant and malignant skin lesions can look benign.
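To make the condense–embed–expand pattern concrete, below is a minimal PyTorch sketch of a single Attention Condenser. The layer choices (max pooling for condensation, nearest-neighbor upsampling for expansion, a sigmoid for the selective attention scores) are illustrative assumptions on our part, not the exact configuration used in [10,17].

```python
import torch
import torch.nn as nn

class AttentionCondenser(nn.Module):
    """Minimal sketch of an Attention Condenser: condense -> embed -> expand,
    then modulate the original input with the resulting attention values."""

    def __init__(self, channels: int, mid_channels: int = 8):
        super().__init__()
        # Condenser layer: project inputs into a lower-dimensional (spatial) space;
        # max pooling keeps activations that sit near other strong activations.
        self.condense = nn.MaxPool2d(kernel_size=2, stride=2)
        # Embedding layers: capture local and cross-channel activation dependencies.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, mid_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, kernel_size=3, padding=1),
        )
        # Expansion layer: project the condensed embedding back to the input size.
        self.expand = nn.Upsample(scale_factor=2, mode="nearest")
        self.gate = nn.Sigmoid()  # selective attention values in (0, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attention = self.gate(self.expand(self.embed(self.condense(x))))
        return x * attention  # selective attention applied to the original inputs


# Usage: modulate a 32-channel feature map (spatial dims must be even here).
y = AttentionCondenser(32)(torch.randn(1, 32, 64, 64))
```

A DC-AC module, as described above, would pair two such condensers (one with an extra embedding layer) and compute selective attention over both outputs.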

3. Materials and Methods

3.1. Data Preparation

This study leverages the 2020 open-source SIIM-ISIC Challenge dataset, which comprises 33,126 dermoscopic images consisting of benign lesions (n = 32,542, or 32,120 excluding duplicates) and malignant lesions (n = 584, or 581 excluding duplicates) [11]. Associated metadata for each image describe the patient’s age, biological sex, the lesion’s general anatomic site, a patient identifier, the ground truth classification label (malignant or benign), and the diagnosis if available (including melanoma, nevi, atypical melanocytic proliferation, café-au-lait macule, lentigo NOS, lentigo simplex, solar lentigo, lichenoid keratosis, and seborrheic keratosis) [11]. Notably, all images were collected from 2056 patients, facilitating holistic analysis of all lesions associated with a given patient (16 lesions per patient, on average) while accounting for relative differences between lesions, which provides an essential source of context in clinical practice [11]. In total, 428 of these patients have at least one melanoma, while the remaining 1628 patients have no melanoma lesions [11]. Though the dataset contains images of skin lesions, some images also include hair obscuring parts of the lesion and/or a visible scale measurement, as seen in Figure 3.
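As a quick sanity check, the composition figures quoted above can be reproduced directly from the dataset's metadata; the file name and column names below follow the public Kaggle CSV release of ISIC 2020 (before duplicate removal) and are assumptions on our part, not code from this study.

```python
import pandas as pd

# Load the public ISIC 2020 training metadata (column names per the Kaggle CSV).
meta = pd.read_csv("train.csv")

print(meta["benign_malignant"].value_counts())   # benign vs. malignant image counts
print(meta["patient_id"].nunique())              # number of unique patients (2056)
print(meta.groupby("patient_id").size().mean())  # average lesions per patient (~16)
```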

3.2. Double-Condensing Attention Condenser Architecture Design

The DC-AC architecture design was recently introduced as a self-attention neural network backbone which leverages DC-AC modules to achieve high accuracy and efficiency with a minimal computational footprint, supporting its use in TinyML applications [10]. The architecture is shown below in Figure 4. Each of the four computation branches has a series of convolutional layers and DC-AC blocks which enable them to learn distinct embeddings that are later merged.
A key feature of this architecture, which was designed using machine-driven exploration, is the use of DC-AC modules, which enable selective self-attention by using both highly condensed feature embeddings, derived by condensing the original input features, and self-attention values [10]. These modules build upon the previously introduced Attention Condensers, which are standalone, self-contained modules consisting of condenser layers, embedding layers, and expansion layers [17]. The approach to condensing input features emphasizes activations close to other strong activations, resulting in an efficient selective attention mechanism [17]. Compared with the original Attention Condensers, the goal of the DC-AC module is to learn a feature representation that is computationally more efficient than that produced by a conventional Attention Condenser, without a significant loss of information. The DC-AC architecture design also employs a heterogeneous columnar design pattern, which supports both independent and complex feature learning through increasing columnar interactions as the level of abstraction increases [10]. Stability and robustness are achieved by leveraging anti-aliased downsampling (AADS) throughout the network [10], sketched below.
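For illustration, here is a minimal PyTorch sketch of anti-aliased downsampling in the BlurPool style: the feature map is low-pass filtered with a fixed binomial kernel before striding, so that small input shifts do not cause large output changes. The 3x3 kernel size and reflect padding are our assumptions; [10] does not specify the exact filter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Sketch of anti-aliased downsampling (AADS): blur with a fixed low-pass
    binomial filter before striding. The 3x3 kernel is an illustrative
    assumption, not a detail specified in [10]."""

    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()  # normalized 3x3 binomial low-pass filter
        # One copy of the filter per channel, applied depthwise.
        self.register_buffer("kernel", kernel.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")  # preserve borders before filtering
        return F.conv2d(x, self.kernel, stride=self.stride, groups=x.shape[1])


# Usage: halve the spatial resolution of a 64-channel feature map.
out = BlurPool2d(64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 16, 16)
```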
This study uses the DC-AC architecture as described in [10].

3.3. Training Strategy

In this study, transfer learning was employed by leveraging the DC-AC backbone pre-trained on ImageNet, with fine-tuning performed on the ISIC 2020 dataset [11], to obtain an efficient network for dermoscopic image analysis and skin cancer detection with a low computational footprint. During training, the entire network was first frozen with the exception of the head. The head was trained on the ISIC 2020 dataset using the AdamW optimizer for 80 epochs, with a learning rate of 0.00005, a weight decay of 0.01, and batch balancing. After the head was trained, the entire network was unfrozen and fine-tuned, again using the AdamW optimizer for 80 epochs, with a learning rate of 0.000005, a weight decay of 0.01, and batch balancing. Cosine annealing was used to dynamically update the learning rates. A sketch of this two-stage schedule is given below.
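The following PyTorch sketch shows one way to realize the freeze-then-unfreeze schedule described above. The names `model`, `model.head`, and `train_loader` are hypothetical placeholders, and the binary cross-entropy loss is our assumption, as the paper does not report its loss function.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

def run_stage(model, loader, lr, epochs=80, weight_decay=0.01, device="cuda"):
    """Train whichever parameters are currently unfrozen for one stage."""
    criterion = torch.nn.BCEWithLogitsLoss()  # assumed loss; not reported in the paper
    optimizer = AdamW((p for p in model.parameters() if p.requires_grad),
                      lr=lr, weight_decay=weight_decay)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)  # cosine annealing
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:  # loader uses the balanced sampler below
            optimizer.zero_grad()
            logits = model(images.to(device)).squeeze(1)
            loss = criterion(logits, labels.float().to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()

# Stage 1: freeze everything except the classification head, lr = 5e-5.
for p in model.parameters():
    p.requires_grad = False
for p in model.head.parameters():
    p.requires_grad = True
run_stage(model, train_loader, lr=5e-5)

# Stage 2: unfreeze the whole network and fine-tune at lr = 5e-6.
for p in model.parameters():
    p.requires_grad = True
run_stage(model, train_loader, lr=5e-6)
```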
Training was performed using a subset of the original dataset consisting of 22,860 images, including 437 malignant lesions. The remaining 9841 images, including 144 malignant lesions, were used for validation. A total of 425 duplicate images were removed before splitting the dataset into training and validation sets. The images were divided by patient so that all images from the same patient were in the same set. Given that the training dataset consists of substantially more benign images than malignant images, batch balancing was performed using weighted random sampling to prevent bias towards the benign class; a sketch of this setup follows. An additional 10,982 images with equal proportions of malignant and benign lesions comprise the test dataset, and these unseen data were used to assess the trained architecture.
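Below is a minimal sketch of the patient-level split and the class-balanced sampler. It assumes the public Kaggle `train.csv` columns (`patient_id`, `target`) and a hypothetical `train_dataset` indexed consistently with the filtered metadata; the 70/30 split fraction merely approximates the image counts reported above, not the authors' exact split.

```python
import pandas as pd
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

meta = pd.read_csv("train.csv")  # public ISIC 2020 metadata (after dropping duplicates)

# Patient-level split: every image from a given patient lands in the same set,
# so near-duplicate lesions from one patient cannot leak across splits.
patients = meta["patient_id"].drop_duplicates().sample(frac=1.0, random_state=0)
train_patients = set(patients[: int(0.7 * len(patients))])
train_meta = meta[meta["patient_id"].isin(train_patients)]

# Batch balancing via weighted random sampling: each image is drawn with
# probability inversely proportional to its class frequency.
labels = torch.as_tensor(train_meta["target"].to_numpy())
class_counts = torch.bincount(labels)
weights = 1.0 / class_counts[labels].float()
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

# `train_dataset` (hypothetical) must be ordered consistently with train_meta.
train_loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
```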
Data augmentation was randomly applied to the training data, including rotation (up to 90°), translation (up to 10%), scale change (up to 20%), shearing (up to 10°), color jitter (up to 50% brightness, contrast, saturation, and hue adjustment), horizontal flips, vertical flips, and random cropping and resizing (to 160 × 160 pixels). All validation images were resized to 160 × 160 pixels. All training and inference tasks were conducted on an NVIDIA GeForce RTX 3090 Ti GPU.
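One way to express this augmentation policy with torchvision is sketched below; the grouping of the affine transforms and their application order are our assumptions, as the paper lists the operations but not their exact composition.

```python
from torchvision import transforms

# Training-time augmentation matching the policy described above.
train_transform = transforms.Compose([
    transforms.RandomAffine(degrees=90,             # rotation up to 90 degrees
                            translate=(0.1, 0.1),   # translation up to 10%
                            scale=(0.8, 1.2),       # scale change up to 20%
                            shear=10),              # shearing up to 10 degrees
    transforms.ColorJitter(brightness=0.5, contrast=0.5,
                           saturation=0.5, hue=0.5),  # up to 50% jitter
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomResizedCrop(160),              # random crop, resized to 160x160
    transforms.ToTensor(),
])

# Validation images are only resized, never augmented.
val_transform = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
])
```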

4. Results

Using a train–val–test split, the efficacy of the proposed network architecture design was evaluated based on its performance on the original (non-augmented) data in each split. The SIIM-ISIC Kaggle competition platform [13] was leveraged to compute the performance on the unseen test set, as the test labels are not publicly available anywhere else. Two scores are reported on the platform: public and private. The public score refers to the network’s AUROC performance on 30% of the test data, while the private score is the AUROC performance on the remaining 70% of the test data. On the training and validation splits, the DC-AC network achieves AUROC scores of 0.9787 and 0.8989, respectively. As seen in Table 1, the DC-AC network architecture design outperforms the Cancer-Net SCa network architecture designs on the test set, achieving AUROC scores at least 0.13 higher than the best Cancer-Net SCa architecture design. Sample skin lesions where the DC-AC design correctly predicted the class and the Cancer-Net SCa suite gave incorrect predictions can be found in Figure 5.
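For the training and validation splits, AUROC can be computed locally; a minimal sketch follows, reusing the hypothetical `model` from the training sketch and a `val_loader` built without augmentation. The sigmoid-to-probability step assumes a single-logit binary head, consistent with the assumed loss above.

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def auroc(model, loader, device="cuda"):
    """Area under the ROC curve over one (non-augmented) data split."""
    model.eval()
    scores, targets = [], []
    for images, labels in loader:
        logits = model(images.to(device)).squeeze(1)
        scores.append(torch.sigmoid(logits).cpu().numpy())  # malignancy probabilities
        targets.append(labels.numpy())
    return roc_auc_score(np.concatenate(targets), np.concatenate(scores))

# Test-set scores, by contrast, come from the Kaggle platform: predictions are
# submitted as a CSV, since the test labels themselves are not public.
```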

5. Discussion

The test set performance of the self-attention model via attention condensers also surpasses that of the self-attention model via transformers, MobileViT-S [18]. MobileViT-S is a state-of-the-art vision transformer designed to balance computational complexity, architectural complexity, and accuracy. Moreover, DC-AC has significantly fewer parameters than the next-best network architecture design (1.6 M vs. 5.6 M). The DC-AC design also has the smallest number of FLOPs (G), in other words, the lowest number of floating point operations needed for a single forward pass. As such, the DC-AC design has better computational performance than the other models.
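Figures like the Param. (M) and FLOPs (G) columns of Table 1 can be reproduced with standard tooling; the sketch below uses fvcore, which is our tooling choice rather than one named by the authors, and assumes the hypothetical `model` with 160 × 160 RGB inputs.

```python
import torch
from fvcore.nn import FlopCountAnalysis

# Parameter count in millions, and operations for one forward pass in billions.
# Note: fvcore counts multiply-accumulate operations, which many papers
# report as FLOPs; check the convention before comparing numbers directly.
params_m = sum(p.numel() for p in model.parameters()) / 1e6
flops_g = FlopCountAnalysis(model, torch.randn(1, 3, 160, 160)).total() / 1e9
print(f"Params: {params_m:.2f} M  FLOPs: {flops_g:.2f} G")
```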

Limitations and Future Work

The DC-AC architecture exhibits strong computational performance compared to current state-of-the-art models; however, its accuracy and AUROC scores are lower than those of the top-performing models in the SIIM-ISIC Melanoma Classification Challenge, which use ensemble designs. Future work to optimize the DC-AC architecture for clinical applications will explore additional architecture designs that could improve classification accuracy while maintaining the DC-AC architecture’s strong computational performance, in order to make the model more competitive with top-performing ensemble models. Additionally, future work includes experimentation with a larger sample size by combining multiple skin lesion datasets, which may improve generalizability and performance accuracy. Likewise, future work will experiment with the use of the DC-AC architecture for other forms of cancer and other medical imaging modalities, to refine this model such that it can be effectively adapted to other cancer applications.

6. Conclusions

In this paper, we investigated leveraging the efficient self-attention structure of a DC-AC architecture to detect skin cancer in skin lesion images. Evaluation on the previously unseen ISIC 2020 test set showed that the proposed DC-AC architecture can increase skin cancer detection performance compared to previously released non-ensemble network architectures. Specifically, the DC-AC design achieved a public AUROC score of 0.9045 and a private AUROC score of 0.8865, over 0.13 above the best Cancer-Net SCa network architecture design. Future work involves iterating on the design of the selected network architecture, refining the approach to generalize to other forms of cancer, and using a larger sample size.

Author Contributions

Conceptualization, C.-e.A.T. and A.W.; methodology, C.-e.A.T. and E.J.; software, C.-e.A.T. and E.J.; validation, C.-e.A.T., E.J. and C.C.; formal analysis, C.-e.A.T., E.J. and C.C.; investigation, C.-e.A.T. and E.J.; resources, C.-e.A.T. and E.J.; data curation, C.-e.A.T. and E.J.; writing—original draft preparation, C.-e.A.T., E.J., C.C. and A.W.; writing—review and editing, C.-e.A.T., E.J., C.C. and A.W.; visualization, C.-e.A.T., E.J. and C.C.; supervision, A.W.; project administration, C.-e.A.T. and A.W.; funding acquisition, A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available at https://www.kaggle.com/c/siim-isic-melanoma-classification/data [11].

Acknowledgments

The authors thank Hayden Gunraj for his invaluable support, exemplifying both his intelligence and personable nature, which greatly enriched the project’s success.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUROC: Area under the ROC curve
DC-AC: Double-Condensing Attention Condenser
SIIM-ISIC: Society for Imaging Informatics in Medicine–International Skin Imaging Collaboration

References

  1. American Academy of Dermatology Association. Skin Cancer. 2023. Available online: https://www.aad.org/media/stats-skin-cancer (accessed on 16 April 2023).
  2. National Cancer Institute. Skin Cancer (Including Melanoma)—Patient Version. 2023. Available online: https://www.cancer.gov/types/skin (accessed on 2 July 2023).
  3. Celebi, M.E.; Wen, Q.; Iyatomi, H.; Shimizu, K.; Zhou, H.; Schaefer, G. A state-of-the-art survey on lesion border detection in dermoscopy images. Dermoscopy Image Anal. 2015, 10, 97–129.
  4. Redelmeier, D.A.; Ferris, L.E.; Tu, J.V.; Hux, J.E.; Schull, M.J. Problems for clinical judgement: Introducing cognitive psychology as one more basic science. CMAJ 2001, 164, 358–360.
  5. Budhiman, A.; Suyanto, S.; Arifianto, A. Melanoma Cancer Classification Using ResNet with Data Augmentation. In Proceedings of the 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 5–6 December 2019; pp. 17–20.
  6. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
  7. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90.
  8. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Chapter 5.2.
  9. Kaplan, J.; McCandlish, S.; Henighan, T.; Brown, T.B.; Chess, B.; Child, R.; Gray, S.; Radford, A.; Wu, J.; Amodei, D. Scaling laws for neural language models. arXiv 2020, arXiv:2001.08361.
  10. Wong, A.; Shafiee, M.J.; Abbasi, S.; Nair, S.; Famouri, M. Faster attention is what you need: A fast self-attention neural network backbone architecture for the edge via double-condensing attention condensers. arXiv 2022, arXiv:2208.06980.
  11. Rotemberg, V.; Kurtansky, N.; Betz-Stablein, B.; Caffery, L.; Chousakos, E.; Codella, N.; Combalia, M.; Dusza, S.; Guitera, P.; Gutman, D.; et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 2021, 8, 34.
  12. Ha, Q.; Liu, B.; Liu, F. Identifying melanoma images using EfficientNet ensemble: Winning solution to the SIIM-ISIC Melanoma Classification Challenge. arXiv 2020, arXiv:2010.05351.
  13. Zawacki, A.; Helba, B.; Shih, G.; Weber, J.; Elliott, J.; Combalia, M.; Kurtansky, N.; Codella, N.; Culliton, P.; Rotemberg, V. SIIM-ISIC Melanoma Classification. 2020. Available online: https://kaggle.com/competitions/siim-isic-melanoma-classification (accessed on 14 June 2023).
  14. Pan, I. SIIM-ISIC Melanoma Classification: 2nd Place. 2023. Available online: https://github.com/i-pan/kaggle-melanoma (accessed on 25 August 2020).
  15. Lee, J.R.H.; Pavlova, M.; Famouri, M.; Wong, A. Cancer-Net SCa: Tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med. Imaging 2022, 22, 143.
  16. Mandrekar, J.N. Receiver operating characteristic curve in diagnostic test assessment. J. Thorac. Oncol. 2010, 5, 1315–1316.
  17. Wong, A.; Famouri, M.; Pavlova, M.; Surana, S. TinySpeech: Attention condensers for deep speech recognition neural networks on edge devices. arXiv 2020, arXiv:2008.04245.
  18. Mehta, S.; Rastegari, M. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. arXiv 2021, arXiv:2110.02178.
Figure 1. Example skin lesion images.
Figure 2. Sample skin lesions that resemble the opposite class.
Figure 3. Sample images that include additional elements beyond the skin lesion.
Figure 4. DC-AC architecture with condenser layers (orange), embedding layers (yellow), and expansion layers (blue) comprising DC-AC modules [10]. The number annotating each layer corresponds to its depth dimension. DC-AC modules leverage an additional condensation step before selective attention is applied, leading to a more computationally efficient feature representation.
Figure 5. Sample skin lesions where the DC-AC design gave a correct prediction but the Cancer-Net SCa suite was incorrect.
Table 1. Network architecture design performance on the ISIC 2020 test dataset (best test scores, achieved by DC-AC, listed first).

Network Architecture Design    Param. (M)   FLOPs (G)   Public Score   Private Score
DC-AC                           1.60         0.32        0.9045         0.8865
MobileViT-S [18]                5.60         2.03        0.8448         0.8566
Cancer-Net SCa-A [15]          13.65         4.66        0.7538         0.7327
Cancer-Net SCa-B [15]           0.80         0.43        0.7697         0.7430
Cancer-Net SCa-C [15]           1.19         0.40        0.7370         0.7333
