Article

Few-Shot Breast Cancer Diagnosis Using a Siamese Neural Network Framework and Triplet-Based Loss

Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, 21000 Split, Croatia
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(9), 567; https://doi.org/10.3390/a18090567
Submission received: 14 July 2025 / Revised: 29 August 2025 / Accepted: 3 September 2025 / Published: 8 September 2025
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

Abstract

Breast cancer is one of the leading causes of death among women of all ages and backgrounds globally. In recent years, the growing deficit of expert radiologists—particularly in underdeveloped countries—alongside a surge in the number of images for analysis, has negatively affected the ability to secure timely and precise diagnostic results in breast cancer screening. AI technologies offer powerful tools that allow for the effective diagnosis and survival forecasting, reducing the dependency on human cognitive input. Towards this aim, this research introduces a deep meta-learning framework for swift analysis of mammography images—combining a Siamese network model with a triplet-based loss function—to facilitate automatic screening (recognition) of potentially suspicious breast cancer cases. Three pre-trained deep CNN architectures, namely GoogLeNet, ResNet50, and MobileNetV3, are fine-tuned and scrutinized for their effectiveness in transforming input mammograms to a suitable embedding space. The proposed framework undergoes a comprehensive evaluation through a rigorous series of experiments, utilizing two different, publicly accessible, and widely used datasets of digital X-ray mammograms: INbreast and CBIS-DDSM. The experimental results demonstrate the framework’s strong performance in differentiating between tumorous and normal images, even with a very limited number of training samples, on both datasets.

1. Introduction

Breast cancer is one of the most frequently diagnosed cancer types worldwide and one of the prevailing causes of cancer-related deaths. According to the World Health Organization’s International Agency for Research on Cancer (IARC) reports, in 2022, an estimated 2.3 million new breast cancer cases were recorded globally, resulting in 669,418 deaths. Though men are not immune to this disease, studies show that females are 100 times more likely to suffer from it than men. A more recent study by the American Cancer Society [1] predicted that in 2025, breast cancer will remain the most common form of cancer among women, accounting for approximately 31% of all female cancers. In addition, female breast cancer incidence rates have been continuously increasing since the mid-2000s by 1% per year overall, a trend that has been at least in part attributed to changing risk factors, such as excess body weight [2].
Breast cancer can be classified as either benign, which is considered to be non-hazardous and non-life-threatening, or malignant. It begins with unnatural cell growth in the lining of the breast and has the potential to spread to neighboring tissues quite rapidly. The nuclei of malignant tissue are often considerably larger than those found in normal tissues, which can be fatal in advanced stages. If the cancer is found early on, before it expands to a size of 10 mm, the patient has an 85% probability of going into complete remission. Therefore, timely detection and accurate prediction of breast cancer are of utmost importance for improving patient outcomes, treatment planning, and survival rates.
The availability of appropriate screening technologies is crucial for spotting the first signs of breast cancer in time. The number one tool for detecting breast cancer in its earliest stages is mammography. Mammography screening has been proven effective in reducing breast cancer mortality rates [3]. However, detecting subclinical breast cancer through screening mammography poses significant challenges. Tumors often occupy only a small fraction of the breast image; for instance, a full-field digital mammogram (FFDM) typically comprises 4000 × 3000 pixels, while a region of interest (ROI) indicating potential malignancy can be as small as 100 × 100 pixels. As a result, despite its advantages, screening mammography remains susceptible to false negatives and overdiagnosis, where breast cancer that would not develop into clinical cancer during a woman’s lifetime is identified on screen. Numerous factors—such as fatigue, eye strain, and varying levels of experience among professionals, including doctors and radiologists, who analyze images—can affect mammography’s diagnostic accuracy. To enhance the predictive accuracy of screening mammography, computer-assisted detection (CADe) and diagnosis (CADx) software [4] have been in clinical use since the 1990s. Unfortunately, earlier versions of these systems did not significantly improve diagnostic performance [5], and progress stalled for well over a decade following their introduction.
In recent years, the remarkable successes of machine learning, especially deep learning—which have revolutionized the field of computer vision with a wide range of applications, from image classification and visual object detection to semantic segmentation—have attracted much attention in the medical community. Deep learning technology shows great potential in assisting health professionals by enhancing mammogram interpretation accuracy and supporting clinical decision-making, thus improving patient outcomes [6,7].
It is a well-known fact that deep learning algorithms generally require large amounts of training data to reach their optimal performance level [8]. This is a huge pitfall, which may hinder their performance, as assembling comprehensive mammography databases with ROI annotations involves considerable labor and time. Indeed, there are only a handful of publicly available mammography databases that are fully annotated, while larger datasets often merely indicate the cancer status of each image [9]. Some studies [10,11] have sought to train deep learning algorithms using whole mammograms without relying on any annotations. Nonetheless, it remains unclear whether these algorithms can effectively identify clinically significant lesions and make predictions based on the relevant sections of the mammograms.
Distinguishing between benign and malignant tumorous cases poses another set of challenges within this domain, stemming from the fact that diverse breast abnormalities—including masses, architectural distortions, and calcifications—come in all shapes and sizes. Effective breast cancer detection requires that the model maintains a high level of accuracy in recognizing intricate patterns, avoiding pitfalls such as over-fitting to specific datasets or mistakenly categorizing non-cancerous formations as concerning. Moreover, in clinical settings, instances of malignancy are often outnumbered by benign cases, resulting in heavily imbalanced, skewed datasets. This discrepancy causes detection models to favor non-cancerous outcomes, subsequently diminishing their effectiveness in identifying genuine cancer cases. As a result, training these models becomes more complex, and specialized methodologies—such as data augmentation, synthetic data generation, or cost-sensitive learning—need to be applied to enhance performance and achieve more decisive results.
To alleviate the data inconsistency and deficiency problem, the current study introduces a reliable metric-based few-shot deep learning framework for the diagnosis of breast cancer patients with a limited amount of training mammograms. Few-shot learning is a form of meta-learning, which is a paradigm used to describe a variety of techniques that aim to improve adaptation through transferring generic and accumulated knowledge (meta-data) from prior experiences with few data points to adapt to new tasks quickly, without requiring training from scratch. More precisely, few-shot learning requires only a small number n of samples for every given image class, to prepare (train) the model that in turn can classify unseen images in the future [12,13]. This style of learning corresponds to what we normally think of as true intelligence. For example, a person can recognize someone’s face after only seeing them a few times, and this ability scales to thousands of different faces.
At present, few if any comparable studies treating breast cancer diagnosis as a k-way, n-shot classification problem—where k and n represent the number of class labels and data samples used for model training—have been published. Owing to its recent considerable successes in facilitating few-shot learning (particularly in the one-shot scenario), we employ the idea of a Siamese network to learn an embedding space in which learning is efficient for few-shot samples. Specifically, a Siamese network is a type of network architecture that contains two or more identical deep convolutional neural subnetworks used to generate feature embeddings from input images, which are then contrasted to verify the similarity between them. We consider three diverse fine-tuned pre-trained CNN models—namely, GoogLeNet, ResNet-50, and MobileNetV3—and examine their effectiveness as backbone encoder sub-networks to obtain unbiased feature representations. In an effort to further improve the classification margin, we replace the traditional binary cross-entropy (CE) loss with a triplet-based loss function. Triplet loss offers significant advantages by bringing intraclass samples closer together while pushing interclass samples further apart in the embedding space. During the training process, triplets are constructed by selecting anchor images, positive images belonging to the same class, and negative images belonging to different classes. The triplet loss is then jointly optimized with the multi-task loss. By integrating these components, the network is capable of learning enhanced classification margins, which ultimately results in an ameliorated performance. The efficacy of the proposed framework is validated using two publicly available mammogram image datasets: INbreast and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM).
In summary, the salient contributions of this research are outlined below:
(a)
Propose a framework leveraging few-shot deep metric learning techniques for breast cancer diagnosis using whole mammogram images.
(b)
Design and implement a Siamese network model, wielding a triplet-based loss function, for the generation of bias-free feature encoding vectors from the input mammograms.
(c)
Examine the efficacy of the proposed framework with a limited dataset across multiple domains, from binary to multi-class classification.
The remainder of this paper is laid out as follows. Section 2 provides a concise overview of relevant literature. Section 3 describes the proposed models and outlines the methodology and datasets utilized in this research. The experimental setup and results are presented and discussed in Section 4 and Section 5. Finally, Section 6 summarizes the paper and reflects on potential directions for future research.

2. Related Work

In recent years, deep learning has become a common tool, widely applied in breast cancer screening. Moreover, studies have indicated that these advanced methods can diagnose breast cancer up to 12 months earlier than traditional clinical approaches [14]. Furthermore, deep learning excels at identifying the most relevant features that are best suited to tackle the said issue. This section will provide a comprehensive overview of deep learning-based techniques specifically designed for analyzing mammography images to detect breast cancer. Several deep learning models have been utilized for this purpose, which include the convolutional neural networks (CNN), regional convolutional neural networks (R-CNN), generative adversarial networks (GAN), and vision transformers (ViT).
CNN [15] is generally regarded as the leading deep-learning technique employed for breast cancer detection. CNN is a class of deep neural networks designed to automatically and adaptively filter inputs for useful information through back-propagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, stacked on top of each other. Since mammography data is not available in abundance, a large proportion of studies resort to a transfer learning approach, reusing the learned weights of previously trained CNNs and applying them to the task of breast cancer screening by fine-tuning only their fully connected layers. In [16], the authors selected three deep learning classifiers—namely, regular CNN, ResNet-50, and Inception-v2—and evaluated their diagnostic performances using the DDSM and INbreast datasets. Khamparia et al. [17] proposed a hybrid transfer learning model—a fusion of modified VGG and ImageNet—which yielded an accuracy of 94.3% on the DDSM dataset. Ragab et al. [18] also used transfer learning with the AlexNet architecture, but they replaced the last layer, responsible for final classification, with a Support Vector Machine (SVM) classifier. Their proposed architecture achieved an accuracy of 87.2% on the CBIS-DDSM dataset. A comparative study on the mammogram classification performance of different networks on small datasets is reported in [19]. Apart from directly using off-the-shelf models, other researchers further sought more effective transfer learning methods to improve the pre-training learning process and fully utilize the knowledge learned from the pre-training dataset [20].
Several studies adopted and modified off-the-shelf end-to-end detectors, which take as input the whole mammography image and output bounding box coordinates for lesions with scores indicating the likelihoods of different lesion types. In particular, Ribli et al. [21] used Faster-RCNN to detect and perform classification on the INbreast and CBIS-DDSM datasets. R-CNNs, as their name indicates, combine a convolutional neural network architecture with specialized components aimed at detecting, localizing, and classifying objects within images. A key feature of these models is the Region Proposal Network (RPN), which functions as a specialized branch of convolutional layers positioned atop the final convolutional layer of the original network. This RPN is specifically trained to identify and localize objects in an image, independent of their classes. The system developed by Ribli et al. achieved a remarkable detection rate of 90% for malignant lesions in the INbreast dataset, while maintaining an impressively low rate of only 0.3 false positives per image. Another study by Antari et al. [22] presented a completely integrated CAD system, comprising a You-Only-Look-Once (YOLO) regional network for detection, a full resolution CNN (FrCNN) for segmentation, and a deep CNN for classification of breast lesions. The system was evaluated on the INbreast dataset, producing an overall accuracy of 95.64%.
Another important deep learning model used for breast cancer detection is the GAN. GAN [23] is a deep learning-based generative model comprising two sub-models: the generator model, which is trained to generate new samples, and the discriminator model, which tries to classify samples as either real (from the domain) or fake (generated). During training, the two models compete against each other in a zero-sum game, where one agent’s gain is another agent’s loss, until the discriminator model is fooled about half the time, meaning the generator model is generating plausible samples. In [24], the authors introduced DiaGRAM (Deep GeneRAtive Multi-task), an innovative end-to-end system that leverages the capabilities of GAN alongside CNN to improve mammogram classification performance. Their approach utilizes GAN to enhance feature learning by extracting useful features applicable to both the discriminative tasks—i.e., patch and image classification—and the GAN’s generative task, which involves distinguishing between the real and generated patches. This dual functionality ensures that the learned features effectively capture the essential data characteristics, thereby aiding the classification efforts. In a separate study, Singh et al. [25] proposed a conditional GAN specifically designed to segment breast tumors within an ROI in mammograms. Their generative network is adept at identifying the tumor areas and generating the binary masks that outline these regions. In turn, the adversarial network is trained to differentiate between actual (ground truth) and synthetic segmentations, thus compelling the generative network to produce binary masks that closely emulate the real-world representations. In the second stage, a shape descriptor based on a CNN is utilized to classify the generated binary masks into four breast tumor shapes (i.e., irregular, lobular, oval, and round). The proposed shape descriptor was trained on the DDSM database, achieving an overall accuracy of 80%, which outperforms the current state-of-the-art. On the other hand, Guan and Loew [26] employed GAN as a data augmenting device to generate synthetic mammographic images. Though these generated images are not exactly like the original ones, they can retain some of the essential features, structures, or patterns of the ROIs in the original images. The obtained results demonstrate that, when classifying normal and abnormal ROIs from DDSM, adding GAN-generated ROIs to the training data yields approximately 3.6% better classification performance than using affine-transformation-augmented ROIs.
When analyzing mammograms, the previously described techniques in the literature mostly tend to focus on specific regions (patches) where tumors are suspected, simultaneously disregarding the rest of the image. This targeted approach can, however, cause them to overlook significant details, which potentially could have been revealed if the entire image was examined at once. Due to its ability to surpass the limitations of models focusing only on a small portion of an image, ViT has recently gained prominence in the field of computer vision, offering encouraging results in terms of accuracy, efficiency, and the aptitude to capture complex image features.
The ViT builds upon the underlying concept of the original transformer architecture, which was initially developed for text processing. By implementing a few adjustments to accommodate different data types, the ViT applies transformer methodology to the realm of images. This model utilizes various tokenization and embedding strategies, yet its general architecture is the same as that of traditional transformers. ViT is characterized by weaker inductive biases, which allows it to scale effectively with much larger datasets compared to CNN. This scalability comes with a hefty price, though. ViT generally requires a substantial amount of training data to achieve optimal performance. To mitigate this challenge, researchers have started to explore hybrid approaches that combine convolutional layers with ViT, thereby enhancing performance even when working with limited image datasets. Additionally, strategies such as transfer learning and self-supervised learning have been extensively utilized to alleviate data constraints.
Recent comparisons between transformer-based models and CNN—pertaining to mammographic image interpretation—have yielded varying results, largely influenced by differences in experimental designs and model architectures [27,28,29,30]. Most research has focused on comparing ViT to CNN for single-view image classification in a transfer learning setting. Notably, architectures like the Data-efficient Image Transformer (DeiT) and Swin transformer emerge as strong contenders for high-resolution medical image processing [30,31,32,33]. The Swin model, in particular, stands out due to its hierarchical architecture, which provides advantages in computational efficiency. Still, current research has not demonstrated that ViT consistently outperforms CNN in every scenario, particularly when it comes to low-dimensional and few-shot medical image processing [34].
Nevertheless, the observed performance gap between ViT and CNN on established mammography datasets, including CBIS-DDSM, tends to be minimal or even slightly in favor of CNN. For instance, research conducted by Cantone et al. [32] revealed that the Swin-v2 transformer was the only model to achieve competitive results that improved with higher input resolutions, suggesting a benefit in leveraging locality bias. Additionally, Miller et al. [28] demonstrated that in a self-supervised framework, ViT pre-trained with masked autoencoder techniques exhibited subpar performance compared to CNNs that were pre-trained using contrastive self-supervised methods.

3. Materials and Methods

3.1. Datasets Description

To demonstrate the efficacy of our few-shot meta-learning approach for breast cancer diagnosis, we assess the proposed Siamese network model’s performance using two publicly available mammogram datasets: CBIS-DDSM and INbreast.
CBIS-DDSM [35] is an enhanced and standardized version of the DDSM dataset that encompasses a grand total of 10,239 images of normal, benign, and malignant cases with verified pathology information. The data were selected and curated by trained mammographers, and the images are available in the DICOM format. Updated ROI segmentation and bounding boxes, plus pathological diagnosis for training data, are also included. It should be noted that the image data for this collection is structured such that each participant has multiple patient IDs. This makes it appear as though there are 6671 patients according to the DICOM metadata, but there are only 1566 actual participants in the cohort.
INbreast database [36] comprises FFDM images, which have different intensity profiles compared with the digitized film mammograms from CBIS-DDSM. Mammography images in this dataset were originally collected at the Centro Hospitalar de S. João (CHSJ) Breast Centre in Porto, Portugal, from August 2008 to July 2010. The dataset includes 410 images of both views—i.e., the mediolateral oblique (MLO) view and the craniocaudal (CC) view—from 115 patients. In 90 of the 115 cases, both breasts were imaged (four images per case), while the remaining cases came from mastectomy patients (two images per case). All images are stored in DICOM format. Additionally, the INbreast database features the BI-RADS [37] assessment categories assigned by radiologists. The categories are defined as follows: 0 (incomplete exam), 1 (no findings), 2 (benign), 3 (probably benign), 4 (suspicious), 5 (highly suggestive of malignancy), and 6 (known biopsy-proven cancer). Due to the absence of reliable pathological confirmation of malignancy within the database, we designated all images with BI-RADS 1 and 2 as negative, while BI-RADS 4, 5, and 6 were classified as positive. Notably, we excluded 12 patients and 23 images marked as BI-RADS 3, as this assessment is commonly not provided during screening.

3.2. Siamese Network Architecture

A Siamese network contains two or more identical base neural networks that share the same architecture and weights. Each of the sub-networks processes a different input image and converts it into a corresponding feature embedding, allowing for effective comparison and analysis of the images. The sub-network outputs are joined by an energy function at the top to produce a similarity score among the input images. The aim is to learn a similarity model in which images of the same class are mapped to nearby embeddings, while images of different classes yield embeddings that differ significantly. The fact that the sub-networks’ weights are tied together ensures that two highly similar images will be mapped to nearly identical positions in the embedding space, because each sub-network applies the same function to its inputs. Furthermore, the symmetric nature of the Siamese network means that when distinct images are presented to the twin sub-networks, the top conjoining layer will calculate the same metric regardless of which branch processes which image.
Let us assume that we have randomly sampled three images (a triplet) from the training data, where two images, anchor $A$ and positive $P$, are taken from the same class, whereas the third image, negative $N$, belongs to a different class. The formation of triplets allows the Siamese network to learn how to differentiate between different samples. During training, each of the input images is individually fed to one of the sub-networks: the positive to the left sub-network, the anchor to the middle sub-network, and the negative to the right sub-network, as shown in Figure 1. (It should be noted that the Siamese network, although depicted as having separate branches, essentially has a single base encoder that sequentially extracts features from the anchor, positive, and negative images.) The obtained feature embeddings $f_A$, $f_P$, and $f_N$, generated by the sub-networks through flattening of the last layer, are subsequently fed to an energy function $E$ that measures the similarity of the anchor with both the positive and the negative image. We use the squared L2 distance metric as our energy function, which can be expressed as follows:
$$E(A, P) = \left\| f_A - f_P \right\|_2^2,$$
$$E(A, N) = \left\| f_A - f_N \right\|_2^2.$$
The value of the energy function will be smaller the more similar the images are, and vice versa. For classification purposes, the pair-wise distance is converted to a probability $p$, which indicates whether the input images belong to the same target class or not.
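To make the shared-encoder design and the energy computation concrete, the following minimal PyTorch sketch (our own illustration, not the authors' code) applies one encoder to all three triplet inputs and returns the two squared L2 distances $E(A,P)$ and $E(A,N)$ defined above; the encoder is assumed to map a batch of images to flat feature vectors.

```python
import torch.nn as nn

class TripletSiamese(nn.Module):
    """Single shared encoder applied to anchor, positive, and negative images."""
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # weight sharing: one encoder serves all three branches

    def forward(self, anchor, positive, negative):
        f_a = self.encoder(anchor)    # embedding of the anchor image
        f_p = self.encoder(positive)  # embedding of the positive (same-class) image
        f_n = self.encoder(negative)  # embedding of the negative (different-class) image
        # Squared L2 distances act as the energy function E(A, P) and E(A, N).
        e_ap = (f_a - f_p).pow(2).sum(dim=1)
        e_an = (f_a - f_n).pow(2).sum(dim=1)
        return e_ap, e_an
```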

3.3. Loss Function

Through the use of an appropriate loss function, the base encoder sub-network can be optimized to produce better embeddings of the input images. The aim is to maximize interclass and minimize intraclass distance by ensuring that the distance between the anchor-positive pair of images is smaller than the distance between the anchor-negative pair of images for each triplet, i.e.:
$$\left\| f_A - f_P \right\|_2^2 + \alpha < \left\| f_A - f_N \right\|_2^2$$
where $\alpha$ is the margin added to the difference between the pair distances, which dictates how dissimilar an embedding must be in order to be considered as belonging to a different class.

3.3.1. Triplet Loss

One way to achieve this is by utilizing a margin-based triplet loss function, originally introduced in FaceNet by Google [38]. Mathematically, this loss function is formulated as follows:
$$L_{tri} = \max\left( \left\| f_A - f_P \right\|_2^2 - \left\| f_A - f_N \right\|_2^2 + \alpha,\; 0 \right)$$
According to [39], better results are obtained when the loss function is replaced by a simple mean square error (MSE) on the soft-max result so that the loss is:
$$L_{tri}\left( d_+, d_- \right) = \left\| \left( d_+,\; d_- - \alpha \right) \right\|_2^2$$
where
$$d_+ = \frac{e^{\left\| f_A - f_P \right\|_2^2}}{e^{\left\| f_A - f_P \right\|_2^2} + e^{\left\| f_A - f_N \right\|_2^2}}$$
and
$$d_- = \frac{e^{\left\| f_A - f_N \right\|_2^2}}{e^{\left\| f_A - f_P \right\|_2^2} + e^{\left\| f_A - f_N \right\|_2^2}}$$
From here, we have a differentiable loss term that can easily be used with the back-propagation algorithm. Note that a “good” model will learn a useful representation when $L_{tri}(d_+, d_-) \to 0$, which happens if and only if $\frac{\left\| f_A - f_P \right\|_2^2}{\left\| f_A - f_N \right\|_2^2} \to 0$.
The base encoder sub-networks typically contain a large number of parameters, which means that a large number of triplets must be sampled from the training data so that a robust model can be learned. However, sampling all possible triplets from the training dataset can quickly become intractable, since the majority of those samples may produce small costs in (3), resulting in slow convergence [38]. In order to ensure fast convergence, it is crucial to select triplets that violate the triplet constraint in (3). This means that, given an anchor image, we want to select the positive image that is farther away from the anchor than any of the other same-class images (hard positive), i.e., $\arg\max_P \left\| f_A - f_P \right\|_2^2$. Similarly, we want to select the negative image whose distance to the anchor is smaller than that of all other different-class images (hard negative), i.e., $\arg\min_N \left\| f_A - f_N \right\|_2^2$.
It is infeasible to compute the $\arg\min$ and $\arg\max$ across the whole training set. Therefore, we opt for the so-called online mining approach, which involves selecting the hard positive/negative samples from within a mini-batch. Picking only the hardest negatives can, in practice, lead to bad local minima early in training and, specifically, to a collapsed model (i.e., $f(x) = 0$); instead, we select $N$ such that:
$$\left\| f_A - f_P \right\|_2^2 < \left\| f_A - f_N \right\|_2^2$$
These negatives are called semi-hard, as they are further away from the anchor than the positive, but still hard because the squared distance is close to the anchor-positive distance. Those negatives lie inside the margin α .
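As an illustration of this online semi-hard mining step, the sketch below (a simplified example of our own, not the paper's implementation) works on a mini-batch of embeddings with integer class labels, assumes every class contributes at least two samples and at least one negative exists in the batch, and falls back to the hardest negative whenever no semi-hard one is available.

```python
import torch
import torch.nn.functional as F

def semi_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-wise semi-hard mining: for each anchor-positive pair, pick a negative
    that is farther away than the positive but still inside the margin."""
    d = torch.cdist(embeddings, embeddings, p=2).pow(2)   # pairwise squared L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # mask of same-class pairs
    losses = []
    for a in range(len(labels)):
        for p in torch.where(same[a])[0]:
            if p == a:
                continue
            d_ap = d[a, p]
            neg_d = d[a][~same[a]]                        # distances to all negatives
            semi = neg_d[(neg_d > d_ap) & (neg_d < d_ap + margin)]
            d_an = semi.min() if semi.numel() > 0 else neg_d.min()
            losses.append(F.relu(d_ap - d_an + margin))   # hinge on the triplet constraint
    return torch.stack(losses).mean()
```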

3.3.2. Circle Loss

Most popular loss functions, including triplet loss, use a similar optimization strategy: maximize the intraclass (within-class) similarity ($s_p$) and minimize the interclass (between-class) similarity ($s_n$). They embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $s_n - s_p$, restricting the penalty strength on every single similarity score to be equal.
Conversely, the circle loss function [40] is built on the idea that different similarity scores should incur different penalties and that similarity scores which deviate significantly from the optimum should be emphasized. This approach extends the difference between negative and positive scores, represented as $s_n - s_p$, into a more nuanced formulation: $\alpha_n s_n - \alpha_p s_p$. Here, $\alpha_n$ and $\alpha_p$ serve as independent weighting factors, enabling $s_n$ and $s_p$ to learn at different rates. The resulting decision boundary is shaped like a circle, which is reflected in the loss function’s name.
Given a feature embedding $f(x)$, let us assume that there are $K$ intraclass similarity scores and $L$ interclass similarity scores associated with it. To minimize each $s_n^j$, $j \in \{1, 2, \ldots, L\}$, as well as to maximize each $s_p^i$, $i \in \{1, 2, \ldots, K\}$, the loss function can be formulated as follows:
$$L_{circ} = \log\left[ 1 + \sum_{i=1}^{K} \sum_{j=1}^{L} e^{\gamma \left( s_n^j - s_p^i + m \right)} \right] = \log\left[ 1 + \sum_{j=1}^{L} e^{\gamma \left( s_n^j + m \right)} \sum_{i=1}^{K} e^{-\gamma s_p^i} \right]$$
in which $\gamma$ is a scale factor and $m$ is a margin for better similarity separation. When $\gamma \to +\infty$, (9) degenerates to the triplet loss with hard mining:
$$L_{tri} = \lim_{\gamma \to +\infty} \frac{1}{\gamma} L_{circ} = \lim_{\gamma \to +\infty} \frac{1}{\gamma} \log\left[ 1 + \sum_{i=1}^{K} \sum_{j=1}^{L} e^{\gamma \left( s_n^j - s_p^i + m \right)} \right] = \max_{i,j}\left[ s_n^j - s_p^i + m \right]_+$$
Circle loss enhances the process of deep feature learning by offering greater flexibility in optimization and a clearer convergence target.
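For completeness, a compact sketch of the pairwise circle loss is shown below (our own illustration following the formulation in [40]); it operates on vectors of intraclass similarities $s_p$ and interclass similarities $s_n$ (e.g., cosine similarities between embeddings), and the default scale $\gamma$ and margin $m$ are the values suggested in that paper rather than the settings used in this study.

```python
import torch
import torch.nn.functional as F

def circle_loss(sp, sn, gamma=80.0, m=0.25):
    """Circle loss over within-class similarities sp and between-class similarities sn."""
    # Self-paced weights: scores far from their optimum receive stronger penalties.
    ap = torch.clamp(1 + m - sp.detach(), min=0.0)  # alpha_p = [O_p - s_p]+, with O_p = 1 + m
    an = torch.clamp(sn.detach() + m, min=0.0)      # alpha_n = [s_n - O_n]+, with O_n = -m
    delta_p, delta_n = 1 - m, m                     # decision margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # softplus(logsumexp_n + logsumexp_p) = log(1 + sum_j sum_i exp(...))
    return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))
```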

3.4. Base Encoders

3.4.1. GoogLeNet

GoogLeNet, also known as Inception-v1, is a 22-layer deep network structure proposed by the Google team in 2014 [41] as a response to the limitations of previous CNN architectures, particularly in terms of depth and computational efficiency. In comparison with, e.g., AlexNet [42], it uses 12× fewer parameters; it therefore works faster and also provides higher accuracy.
GoogLeNet’s main innovative trait is its nine inception modules, which form the building blocks of the architecture. An inception module consists of multiple convolution layers, with kernel sizes ranging from 1 × 1 to 5 × 5, and a 3 × 3 max pooling operation block. These modules receive data from the previous layers and then apply several parallel operations to the same data. To make GoogLeNet even more efficient, the authors introduced another clever concept: bottleneck layers. These layers, embedded inside the inception modules, use 1 × 1 convolutions to perform dimensionality reduction before applying larger convolutional kernels. This not only significantly reduces the computational load but also acts as a regularizer, preventing over-fitting.
Each branch of a single inception module calculates different features based on the data passed from the previous layer. All the outputs are concatenated at the end of these parallel operations as input for the next layers of the network. This allows the model to take advantage of multilevel feature extraction and to cover larger image areas, while keeping a fine resolution for small details in the images. At the end of the network, the obtained feature map is compressed into a one-dimensional vector and finally classified by using the softmax function.
GoogLeNet has had a substantial impact on the field of deep learning. The architecture established a new benchmark for classification and detection during the ImageNet Large-Scale Visual Recognition Challenge in 2014, while also laying the groundwork for subsequent innovations in neural network design.

3.4.2. ResNet-50

In order to tackle the challenges of gradient disappearance and gradient explosion that may occur during deep convolutional network training, He et al. [43] proposed a novel network architecture called residual network (ResNet). A significant innovation in this architecture is the use of skip connections, also known as shortcut connections, which facilitate the learning of residual functions that map inputs to desired outputs. The shortcut connections bypass one or more layers, connecting earlier layers directly to those appearing further along in the network. This design preserves essential information from the initial layers, effectively alleviating the vanishing gradient problem during backpropagation.
ResNet-50 comprises 50 layers, organized as an initial convolutional stem followed by four stages of residual blocks. The stem is equipped with batch normalization, ReLU activation, and max pooling operations, which work together to extract low-level features from the input image. These features are then processed by the residual blocks, the fundamental elements of ResNet-50. Each stage combines an identity block, which processes the input through a sequence of convolutional layers before adding the input back to the output, and a convolutional block, which is similar to the identity block but additionally applies a 1 × 1 convolution on the shortcut path so that the dimensions of the input and output match.
Finally, the architecture ends with fully connected layers, which play a crucial role in classifying the output. The output from the last fully connected layer is passed through a softmax activation function, generating the final class probabilities. ResNet-50 has been pre-trained on extensive datasets, such as ImageNet, and it consistently yields state-of-the-art performance across various benchmarks.

3.4.3. MobileNetV3

MobileNetV3 is the latest and most efficient generation of the popular deep learning model for smartphone applications and edge computing, released by the Google team in 2019 [44]. It has two releases: large and small. The large version possesses a slightly greater number of parameters and is computationally more demanding, which enables it to learn more complex patterns and higher-level abstract features. Consequently, the large model has been chosen as the base encoder in this study.
Building upon the innovations of its predecessors [45], MobileNetV3 has been further refined through the integration of two advanced techniques: MNasNet, which optimizes configuration selection, and the NetAdapt algorithm, aimed at discovering and fine-tuning the network architecture. In the initial stage, MobileNetV3 implements a groundbreaking activation function known as hard swish (h-swish), which enhances operational speed and reduces computational overhead compared to the modified sigmoid function introduced by [46]. The model’s core building block is the so-called inverted residual block, which combines a depthwise separable convolution block with a squeeze-and-excitation (SE) channel attention block. This design draws inspiration from bottleneck layers, utilizing an inverted residual connection to link input and output features within the same channels, thereby improving feature representation while minimizing memory usage. In contrast to traditional convolution blocks, depthwise separable convolutions apply a depthwise kernel to each channel, followed by a 1 × 1 pointwise convolutional kernel with an accompanying batch normalization layer. The SE block enhances the network’s capability to focus on significant feature channels, thereby bolstering its representational strength. This process involves computing the mean of the input features through an average pooling layer, applying the ReLU or h-swish activation functions to derive feature weights, and then multiplying these weights with the original feature matrix to yield weighted output features. This technique significantly strengthens MobileNetV3’s learning efficacy by amalgamating feature channels.
In the final stage, the average pooling layer is advanced, and the convolutional layer adjusts the number of feature channels to expand into a higher-dimensional space. This structural design improves computational speed and performance without affecting accuracy. With its lightweight architecture and impressive performance, MobileNetV3 has become a mainstay in both academic and industrial applications.
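To illustrate how these backbones can serve as embedding encoders, the sketch below loads the ImageNet-pretrained models from torchvision and swaps each classification head for an embedding layer. This is a simplified, single-layer adaptation with a hypothetical 128-dimensional output; the actual fine-tuning used in this work (replacing the last two layers) is described in Section 4.3.

```python
import torch.nn as nn
import torchvision.models as models

def build_encoder(name: str, embed_dim: int = 128) -> nn.Module:
    """Return an ImageNet-pretrained backbone whose classifier is replaced
    by an embedding head of size embed_dim."""
    if name == "googlenet":
        net = models.googlenet(weights="DEFAULT")
        net.fc = nn.Linear(net.fc.in_features, embed_dim)
    elif name == "resnet50":
        net = models.resnet50(weights="DEFAULT")
        net.fc = nn.Linear(net.fc.in_features, embed_dim)
    elif name == "mobilenet_v3_large":
        net = models.mobilenet_v3_large(weights="DEFAULT")
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, embed_dim)
    else:
        raise ValueError(f"Unknown backbone: {name}")
    return net
```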

4. Experimental Setup

In this study, we systematically evaluate the effects of using different n-shot variants and loss functions separately for the CBIS-DDSM and INbreast datasets. These datasets provided a benchmark to assess the reliability and efficiency of the proposed n-shot meta-learning approach in the context of breast cancer detection. By utilizing the same training and testing settings, we were able to conduct a direct performance comparison between the CBIS-DDSM and INbreast datasets, thus ensuring a comprehensive analysis of our method.

4.1. Data Preparation

We deliberately avoid extensive pre-processing of the mammograms in our datasets, in an effort to improve the proposed Siamese network model’s generalization ability. This approach should consequently make the model more robust to varying image quality and noise present in the scans while extracting feature embeddings from the input images. Hence, only a few standard pre-processing techniques were performed to optimize the model training procedure. Firstly, adaptive histogram equalization was applied to the input mammograms to enhance the image contrast. Given that the average size of the mammograms in the CBIS-DDSM and INbreast datasets is approximately 3000 × 4800 pixels, they all had to be rescaled to 224 × 224 pixels to match the required input image size of the selected network models. In addition, image pixel values were converted from [0, 255] to [0, 1] and subsequently standardized to ensure that all values in the data have zero mean and unit variance. This is achieved by subtracting the mean and dividing by the standard deviation of the image pixel values in the particular dataset. Standardization serves as a crucial pre-processing step in data preparation, expediting model convergence by removing biases among features and ensuring a uniform distribution throughout the dataset.
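A minimal version of this preprocessing pipeline might look as follows (an illustrative sketch that assumes the DICOM mammograms have already been exported to 8-bit grayscale files; the CLAHE clip limit and tile size are placeholder values, and mean and std are assumed to be precomputed per dataset on the [0, 1]-scaled images).

```python
import cv2
import numpy as np
import torch

def preprocess_mammogram(path: str, mean: float, std: float) -> torch.Tensor:
    """Adaptive histogram equalization, resizing to 224x224, scaling to [0, 1],
    and per-dataset standardization of a single mammogram."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                  # 8-bit grayscale mammogram
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # contrast enhancement
    img = clahe.apply(img)
    img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
    img = img.astype(np.float32) / 255.0                          # [0, 255] -> [0, 1]
    img = (img - mean) / std                                      # zero mean, unit variance
    tensor = torch.from_numpy(img).unsqueeze(0)                   # shape (1, 224, 224)
    return tensor.repeat(3, 1, 1)                                 # 3 channels for the backbones
```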
To avoid any bias or overfitting that might result in ambiguous classification accuracies, we ensured that the number of mammograms was balanced across all three categories for both datasets. Stratified random sampling was used to divide each dataset into three subsets for training, validation, and testing, with a split ratio of roughly 70%:15%:15%, respectively. Table 1 provides an overview of the data distributions of the normal, benign, and malignant classes for each dataset. Samples of the mammograms from the CBIS-DDSM and INbreast datasets are given in Figure 2.

4.2. Training Strategy

We train our model in an episodic fashion where each episode contains multiple training tasks. For each training task, a mini-batch of triplets with n-shot anchors from each of the 3 classes—thus totaling 3 × n -shot examples—is created. During training, the loss function assesses performance for each task in turn, given the respective batch of triplets. At the end of each episode, the model parameters are refined through backpropagation based on the computed loss to optimize the performance. This iterative process allows the model to learn a metric embedding by gaining experience across a series of training tasks. For validation, we use a completely different set of tasks and evaluate performance on the mini-batches of triplets sampled from the validation subset. Note that there is no overlap between examples in the training and validation subsets, so the algorithm must build a distance metric in general, rather than on a particular image subset. Figure 3 shows an example episode training scenario, where we create tasks, each of which is defined by a mini-batch of triplets containing sample images from the meta-training dataset. A detailed training strategy is presented in Algorithm 1.
Algorithm 1 Few-shot learning training strategy
  • Input: Dataset D
  • Output: Trained model
  • Initialize base encoder network f
  • Define parameters: number of epochs (episodes) $n\_epochs$, number of training tasks per episode $n\_tasks$, number of anchors $n\_shot$, loss function $L$
  • for each epoch in $1, \ldots, n\_epochs$ do
  •     for each training task in $1, \ldots, n\_tasks$ do
  •         Create a mini-batch of $N = 3 \times n\_shot$ triplets $\{(a_i, p_i, n_i)\}_{i=1}^{N}$
  •         for each triplet $(a, p, n)$ in mini-batch $\{(a_i, p_i, n_i)\}_{i=1}^{N}$ do
  •            Compute embeddings $x_a = f(a)$, $x_p = f(p)$, $x_n = f(n)$
  •            Compute similarity scores (distances) $d_p = \|x_a - x_p\|_2^2$, $d_n = \|x_a - x_n\|_2^2$
  •            Calculate $L_i$ according to the selected loss function definition
  •         end for
  •         Calculate total loss $L$ (e.g., the average of $L_i$ across all triplets)
  •         Update model parameters using backpropagation
  •     end for
  •     if  early stopping  then
  •         break
  •     end if
  • end for
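A compact PyTorch rendering of Algorithm 1 is given below. Here, sample_task and validate are hypothetical helpers that draw a mini-batch of triplets from the meta-training set and compute a loss over validation tasks, respectively, and the loss function is assumed to operate on the pairwise distances $d_p$ and $d_n$.

```python
def train_few_shot(encoder, loss_fn, sample_task, validate, optimizer,
                   n_epochs=100, n_tasks=10, patience=10):
    """Episodic training loop following Algorithm 1."""
    best_val, stale = float("inf"), 0
    for epoch in range(n_epochs):
        for _ in range(n_tasks):
            a, p, n = sample_task()                   # mini-batch of 3 x n_shot triplets
            f_a, f_p, f_n = encoder(a), encoder(p), encoder(n)
            d_p = (f_a - f_p).pow(2).sum(dim=1)       # anchor-positive squared distances
            d_n = (f_a - f_n).pow(2).sum(dim=1)       # anchor-negative squared distances
            loss = loss_fn(d_p, d_n).mean()           # average loss over the mini-batch
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        val_loss = validate(encoder)                  # loss on held-out validation tasks
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
        if stale >= patience:                         # early stopping
            break
    return encoder
```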

4.3. Implementation Details

The base encoder networks and the proposed Siamese network model were implemented using PyTorch 2.6.0 with CUDA 12.4. All experiments were run in a cloud-based Google Colab notebook environment, which provides free access to computing resources, including GPUs and TPUs. Google Colab currently features an NVIDIA Tesla T4 GPU that contains 40 streaming multiprocessors with a 6 MB L2 cache shared by all of them. It also has 16 GB of high-bandwidth RAM (GDDR6) and comes with pre-installed Python 3.x packages. Using this environment made it possible to complete all experimental runs for a full iteration cycle within an average of 3 min per single test.
To compensate for the lack of a large annotated medical dataset, we utilized a transfer learning approach and initialized the trainable parameters of the base encoder networks using weights learned on the ImageNet dataset [47]. Additionally, the pre-trained base encoders were fine-tuned to adapt them to the specific task. For this purpose, we discarded the last two layers of the pre-trained models and replaced them with two newly initialized layers trained from scratch, thus enabling them to capture the unique characteristics of the selected datasets while retaining the powerful feature extraction capabilities of the pre-trained models. Embeddings (feature vectors) were generated by stripping the last fully connected layer of the base encoder network.
To train the proposed Siamese network model, we employed a Stochastic Gradient Descent (SGD) optimizer, with the initial learning rate, momentum, and weight decay set to $1 \times 10^{-5}$, $0.9$, and $5 \times 10^{-4}$, respectively. Furthermore, we used the MultiStepLR scheduler with $\gamma = 0.1$, as this value was deemed the best after hyperparameter tuning. The model was trained for 100 epochs (episodes) per experiment, and an early stopping procedure was implemented to halt the training process if the monitored performance measure showed no improvement for a “patience” number of epochs.
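The corresponding optimizer and scheduler setup is sketched below; the MultiStepLR milestones are not reported in the paper, so the epochs shown are placeholders, and build_encoder refers to the illustrative helper from Section 3.4.

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

encoder = build_encoder("resnet50")   # any of the three fine-tuned backbones
optimizer = torch.optim.SGD(encoder.parameters(),
                            lr=1e-5, momentum=0.9, weight_decay=5e-4)
scheduler = MultiStepLR(optimizer, milestones=[50, 80], gamma=0.1)  # placeholder milestones

for episode in range(100):
    # ... run the training tasks for this episode (see Algorithm 1) ...
    scheduler.step()
```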

4.4. Evaluation Metrics

The proposed Siamese network model is rigorously scrutinized using five key performance metrics: accuracy, precision, recall, specificity, and F1-score. Here, accuracy is defined as the proportion of correct predictions relative to the total number of instances assessed. Precision, often referred to as positive predictive value (PPV), represents the ratio of true positive cases over all predictions made within the positive class. Recall or sensitivity—also called true positive rate (TPR)—which is particularly important in medical applications, refers to the percentage of actual positive cases that are accurately identified. On the other hand, specificity, commonly known as the true negative rate (TNR), is the percentage of true negative instances that are correctly classified. Last but not least, the F1-score denotes the harmonic mean of precision and recall, capturing the balance between the two metrics.
Also, the receiver operating characteristic (ROC) curve, which visualizes the trade-off between sensitivity and specificity at various classification thresholds, is utilized together with its area under the curve (AUC) score. The AUC score reflects how well the model distinguishes between classes and has been proven, both theoretically and empirically, to be more suitable than the accuracy metric [48] for evaluating classification performance. The AUC score can be computed as follows:
$$AUC = \frac{S_p - n_p \left( n_p + 1 \right) / 2}{n_p n_n}$$
where $S_p$ is the sum of the ranks of all positive instances, while $n_p$ and $n_n$ denote the total numbers of positive and negative instances, respectively.
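The AUC can be computed directly from the rank positions of the test scores, as in the short NumPy sketch below (an illustration of the formula above; ties between scores are ignored for brevity).

```python
import numpy as np

def auc_from_ranks(scores: np.ndarray, labels: np.ndarray) -> float:
    """Rank-based AUC: S_p is the sum of the ranks of the positive instances
    when all scores are ranked in ascending order."""
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)     # ranks start at 1
    n_p = int(labels.sum())
    n_n = len(labels) - n_p
    s_p = ranks[labels == 1].sum()
    return (s_p - n_p * (n_p + 1) / 2) / (n_p * n_n)
```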

5. Results and Discussion

To assess the efficacy of the proposed framework for diagnosing breast cancer cases, the following approach was adopted. Since our framework is based on a few-shot learning paradigm, first we examine the effect that the number of shots has on the overall performance, and to this end we conduct multiple experiments in various 3-way, n-shot learning settings—where n varies between 3 and 7—with both triplet and circle losses. In order to get even better insight, we subsequently scrutinize the efficiency of the proposed framework in distinguishing between tumorous and normal mammograms (2-class problem variant) with similar n-shot learning settings. Finally, for comparison purposes, we present the results obtained by using the three selected, fine-tuned, pre-trained base encoder networks as standalone classifiers.
During evaluation, the distances between each singled-out test subset instance and all the remaining images are compared against each other. Thresholding and the 5-nearest-neighbour algorithm are then applied to classify the instances into the corresponding categories. In the experiments focusing on multi-class classification, the macro average is employed to derive the final performance metrics. This technique ensures that all categories are attributed the same weight when calculating the mean. As a result, the final value reflects the arithmetic mean of the individual metrics associated with each category.
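The nearest-neighbour step can be illustrated with the short sketch below (a simplified example of our own, assuming embedding matrices for the test instances and the reference images together with their integer labels).

```python
import torch

def knn_predict(test_emb, ref_emb, ref_labels, k=5):
    """Classify each test embedding by majority vote among its k nearest
    reference embeddings under the squared L2 distance."""
    d = torch.cdist(test_emb, ref_emb, p=2).pow(2)      # (n_test, n_ref) distance matrix
    knn_idx = d.topk(k, dim=1, largest=False).indices   # indices of the k closest references
    knn_labels = ref_labels[knn_idx]                    # (n_test, k) neighbour labels
    return torch.mode(knn_labels, dim=1).values         # majority vote per test sample
```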

5.1. Classification Results for 3-Way Detection Problem

The evaluation results of the proposed Siamese network model are presented independently for each dataset and each loss function. In all experiments, we made sure to employ a consistent architecture of base encoder networks, alongside uniform training and test configurations, to objectively validate the model’s performance on both the CBIS-DDSM and INbreast datasets. Table 2 and Table 3 summarize the classification results for all three base encoder networks over various n-shot settings on the INbreast dataset using triplet and circle loss, respectively. It is evident from the results that generally better AUC scores and other performance metric values are achieved with a lower number of shots (3–4), and they appear to decline as the number of shots increases. Only for the MobileNetV3 base encoder with triplet loss is the best performance recorded when training was done with mini-batches containing 6 triplets. Overall, the performance results obtained with the circle loss function appear slightly better than those yielded with the standard triplet loss function, which is in concurrence with the findings reported in the literature. The highest performance rates in terms of sensitivity (90%) and specificity (94.64%) on the INbreast dataset are obtained with the ResNet50 base encoder in a 4-shot setting.
The average performance evaluation results for different loss functions and n-shot settings on the CBIS-DDSM dataset are presented in Table 4 and Table 5. Here, the performance results across all experiments are far more homogeneous—showing small variations among themselves based on loss function and number of shots used for training—and are, for the most part, higher than those on the INbreast dataset. On the CBIS-DDSM dataset, GoogLeNet, ResNet50, and MobileNetV3 yield an average AUC score of 0.9281 , 0.9088 , and 0.9092 , respectively, using triplet loss, and an average AUC score of 0.9373 , 0.9311 , and 0.9180 , respectively, for the circle loss function. In addition, the average sensitivities of 84.98 % , 85.99 % , and 84.98 % , respectively, are achieved for GoogLeNet, ResNet50, and MobileNetV3 base encoders with triplet loss. Using circle loss, the average recorded sensitivities for the GoogLeNet, ResNet50, and MobileNetV3 base encoders are 85.15 % , 87.68 % , and 85.908 % , respectively. Again, the averaged performance results over all experiments for different base encoder networks in combination with circle loss consistently narrowly surpass those obtained through the use of the standard triplet loss function, in terms of all evaluation metrics. Figure 4 and Figure 5 show corresponding ROC curves for all the 3-way experiments carried out on both INBreast and CBIS-DDSM datasets.

5.2. Classification Results for 2-Way Detection Problem

Table 6, Table 7, Table 8 and Table 9 show the performance results with respective loss functions and various n-shot settings in classifying healthy and breast cancer patients on INbreast and CBIS-DDSM datasets. As expected, the model generally shows improved performance for diagnosing breast cancer patients on both datasets in this simplified 2-class scenario.
In Table 6 and Table 7, we observe the results from the INbreast dataset, where normal mammograms are accurately classified with average specificities of 92.00%, 76.00%, and 68.00% for GoogLeNet, ResNet50, and MobileNetV3 with triplet loss, and 82.00%, 80.00%, and 68.00% with circle loss, respectively. For tumorous cases, the average sensitivities are commendably high, standing at 88.00% and 91.00% for GoogLeNet, and 92.00% for both ResNet50 and MobileNetV3, with triplet and circle loss, respectively. Notably, GoogLeNet combined with circle loss in a 4-shot setting demonstrates near-perfect performance with the highest AUC score of 0.9900. These results clearly illustrate that the proposed Siamese network model, particularly when using GoogLeNet as a base encoder, is able to effectively rule out the presence of a disease in a patient.
When evaluating the CBIS-DDSM dataset, the classification of normal cases yielded average specificities exceeding those in 3-way experiments, namely 95.95 % and 94.18 % for GoogLeNet, 93.16 % and 96.20 % for ResNet50, and 94.18 % and 94.68 % for MobileNetV3, with triplet and circle loss, respectively. The average sensitivities for tumorous cases also reflect strong outcomes, with 93.92 % , 94.81 % , and 96.97 % for GoogLeNet, ResNet50, and MobileNetV3 with triplet loss, and 93.80 % , 95.06 % , and 95.57 % with circle loss, respectively. Furthermore, GoogLeNet paired with circle loss achieved two top AUC scores of 0.9854 and 0.9843 . These findings underscore the effectiveness of the proposed Siamese network model in distinguishing between healthy and cancer patients across the datasets.
Corresponding ROC curves for all 2-way experiments conducted on the INBreast and CBIS-DDSM datasets are illustrated in Figure 6 and Figure 7.

5.3. Siamese Network Model vs. Standard Pre-Trained CNN Classifiers

The performance results attained by utilizing the three selected, fine-tuned, pre-trained base encoder networks as standalone classifiers in 2-way and 3-way scenarios are presented in Table 10 and Table 11. By comparing these results to those achieved by the proposed Siamese network model, it can be concluded that the latter consistently outperforms the pre-trained CNN models in every scenario and across all evaluation metrics for both the INbreast and CBIS-DDSM datasets. There could be many reasons for this, but the main point is that there is not nearly enough mammogram data available to efficiently train deep neural networks. The somewhat satisfactory performance results the three networks achieved can mostly be attributed to the extensive pre-training they underwent using the ImageNet dataset. On the other hand, the Siamese-based model exploits the benefits of being given triplets of images, where it learns to distinguish a similar image from different ones, and exhibits stronger generalization capabilities. Specifically, on the INbreast dataset for a binary classification task, GoogLeNet, ResNet-50, and MobileNetV3, paired with triplet loss, have improved their performance in terms of AUC scores by an average of 0.1350, 0.3130, and 0.1705, respectively. For the circle loss, the corresponding average AUC score improvements stand at 0.0810, 0.3130, and 0.1655 in a 2-way scenario. Similarly, for a 3-class problem, the average recorded boosts in AUC score amount to 0.3918 and 0.4404 for GoogLeNet, 0.5271 and 0.3828 for ResNet50, and 0.4791 and 0.4325 for MobileNetV3, with triplet and circle loss, respectively. In the context of the CBIS-DDSM dataset, considering a 2-way scenario, the three selected pre-trained base encoders have enhanced their AUC scores by an average of 0.3002, 0.3514, and 0.1103 in the case of triplet loss, and an average of 0.3128, 0.3536, and 0.1169 in the case of circle loss. In a 3-class setting, the average AUC score enhancements for GoogLeNet are approximately 0.4534 and 0.4626, while ResNet50 shows improvements of around 0.4542 and 0.4765. Meanwhile, MobileNetV3 demonstrates increases of 0.4119 and 0.4207 when utilizing triplet and circle loss, respectively.
These achievements are particularly significant, knowing that the Siamese model is trained with only a (very) limited number of training examples from each category of images. Furthermore, the proposed model is relatively compact, featuring fewer trainable parameters, which highlights the advantages of employing a few-shot learning approach for diagnosing breast cancer patients compared to traditional, supervised learning techniques.

5.4. Statistical Inference Study

To assess the means and standard deviations of TPR values and AUC scores, we conducted a statistical analysis using the Kruskal–Wallis test. This non-parametric method is particularly effective for comparing two or more independent samples, regardless of whether they are of equal or differing sizes. A significant result from the Kruskal–Wallis test implies that at least one sample exhibits a stochastic dominance over the others. We utilized this approach to investigate whether there are significant statistical differences—at a 95 % confidence level—in the outcomes yielded by the proposed Siamese network model, depending on the number of shots or base encoders used, specifically for the INbreast and CBIS-DDSM datasets in 2-way and 3-way scenarios.
The computed p values, presented in Table 12 and Table 13, indicate that even though the results differ insignificantly ($p > 0.05$) based on the number of training triplets (n-shot), in some cases significant differences are recorded, in both TPR values and AUC scores, between the three selected base encoder networks for the INbreast and CBIS-DDSM datasets. While the p value from the Kruskal–Wallis test confirms the existence of a statistically significant difference, the Mann–Whitney U tests are further employed to identify which specific CNN model(s) produce significant results. This analysis is also conducted at a 95% confidence level, with the findings detailed in Table 14 and Table 15. For a binary classification task with triplet loss, MobileNetV3 exhibits a significantly inferior average AUC score compared to the best-performing base encoder, GoogLeNet, on the INbreast dataset. When analyzing the same scenario in the context of the CBIS-DDSM dataset, we find that all three base encoders differ significantly in their performance in terms of TPR values; whereas, when circle loss is utilized, GoogLeNet appears to have an edge in terms of TPR values and AUC scores over MobileNetV3 and ResNet50, respectively. Similar conclusions can be drawn for the 3-class problem using triplet loss.

6. Conclusions and Future Research

Breast cancer classification falls within a data-scarce domain, where acquiring enough training samples to effectively train a conventional neural network is often impractical. To circumvent the problem of insufficient data, this paper presents a few-shot learning approach to breast cancer detection, evaluated on two digital X-ray mammogram datasets. In particular, we test three different pre-trained, fine-tuned CNN encoders for their ability to capture unbiased feature representations resilient to overfitting, and we adapt a Siamese network architecture to perform the final classification of normal and tumorous cases. Our findings indicate that the proposed model, utilizing a triplet-based loss, delivers a highly accurate and practical mechanism for the automated diagnosis of breast cancer in various n-shot learning settings. Moreover, our model not only matches but exceeds, by a considerable margin, the performance of the fine-tuned pre-trained CNN models employed as standalone classifiers. These results may encourage health professionals to leverage the model for the early detection of breast cancer, easing their workload and enabling them to allocate more time to crafting personalized treatment plans for their patients.
Moving forward, we intend to extend our work by performing data augmentation to artificially increase the size and diversity of the training data, in an attempt to further improve the performance and generalization of the proposed model. In addition, more specific and meaningful representations could potentially be derived from multi-scale feature maps, obtained by superimposing features extracted from annotated ROI patches containing different types of breast lesions onto features extracted from the entire breast images. Finally, the presented model can be applied to the detection of other common cancer types, such as brain, lung, or prostate cancer.
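As a rough illustration of the augmentation direction outlined above, a pipeline along the following lines could be applied to the training mammograms; the specific transforms and their parameters are illustrative assumptions (using torchvision) rather than choices made in this work.

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                   # mirror left/right breast views
    transforms.RandomRotation(degrees=10),                     # small orientation jitter
    transforms.RandomResizedCrop(size=224, scale=(0.9, 1.0)),  # mild zoom/crop variation
    transforms.ColorJitter(brightness=0.1, contrast=0.1),      # intensity perturbation
    transforms.ToTensor(),
])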

Author Contributions

Conceptualization, T.M. and V.P.; methodology, T.M.; software, T.M.; validation, T.M. and V.P.; formal analysis, T.M.; investigation, T.M.; data curation, T.M.; writing—original draft preparation, T.M.; writing—review and editing, V.P.; visualization, T.M.; supervision, V.P.; project administration, V.P.; funding acquisition, V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Giaquinto, A.; Sung, H.; Newman, L.; Freedman, R.; Smith, R.; Star, J.; Jemal, A.; Siegel, R. Breast cancer statistics 2024. CA Cancer J. Clin. 2024, 74, 477–495. [Google Scholar] [CrossRef]
  2. Pfeiffer, R.; Webb-Vargas, Y.; Wheeler, W.; Gail, M. Proportion of U.S. trends in breast cancer incidence attributable to long-term changes in risk factor distributions. Cancer Epidemiol. Biomarkers Prev. 2018, 27, 1214–1222. [Google Scholar] [CrossRef] [PubMed]
  3. Mahmood, T.; Li, J.; Pei, Y.; Akhtar, F.; Imran, A.; Rehman, K. A brief survey on breast cancer diagnostic with deep learning schemes using multi-image modalities. IEEE Access 2020, 8, 165779–165809. [Google Scholar] [CrossRef]
  4. Elter, M.; Horsch, A. CADx of mammographic masses and clustered microcalcifications: A review. Med. Phys. 2009, 36, 2052–2068. [Google Scholar] [CrossRef] [PubMed]
  5. Lehman, C.; Wellman, R.; Buist, D.; Kerlikowske, K.; Tosteson, A.; Miglioretti, D. Diagnostic accuracy of digital screening mammography with and without computer-aided detection. JAMA Intern. Med. 2015, 175, 1828–1837. [Google Scholar] [CrossRef]
  6. Bhowmik, A.; Eskreis-Winkler, S. Deep learning in breast imaging. BJR Open 2022, 4, 1–12. [Google Scholar] [CrossRef] [PubMed]
  7. Oladimeji, O.O.; Imran, A.A.Z.; Wang, X.; Unnikrishnan, S. Deep learning advances in breast medical imaging with a focus on clinical readiness and radiologists’ perspective. Image Vis. 2025, 161, 105601. [Google Scholar] [CrossRef]
  8. Taye, M. Understanding of machine learning with deep learning: Architectures, workflow, applications and future directions. Computers 2023, 12, 91. [Google Scholar] [CrossRef]
  9. Shen, L.; Margolies, L.; Rothstein, J.; Fluder, E.; McBride, R.; Sieh, W. Deep learning to improve breast cancer detection on screening mammography. Sci. Rep. 2019, 9, 12495. [Google Scholar] [CrossRef]
  10. Aboutalib, S.; Mohamed, A.; Berg, W.; Zuley, M.; Sumkin, J.; Wu, S. Deep learning to distinguish recalled but benign mammography images in breast cancer screening. Clin. Cancer Res. 2018, 24, 5902–5909. [Google Scholar] [CrossRef]
  11. Kim, E.-K.; Kim, H.-E.; Han, K.; Kang, B.J.; Sohn, Y.-M.; Woo, O.H.; Lee, C.W. Applying data-driven imaging biomarker in mammography for breast cancer screening: Preliminary study. Sci. Rep. 2018, 8, 2762. [Google Scholar] [CrossRef]
  12. Li, F.; Fergus, R.; Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 594–611. [Google Scholar] [CrossRef]
  13. Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; Wierstra, D. Matching networks for one shot learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3630–3638. [Google Scholar] [CrossRef]
  14. Nemade, V.; Pathak, S.; Dubey, A.K. A systematic literature review of breast cancer diagnosis using machine intelligence techniques. Arch. Comput. Methods Eng. 2022, 29, 4401–4430. [Google Scholar] [CrossRef]
  15. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  16. Al-antari, M.; Han, S.-M.; Kim, T.-S. Evaluation of deep learning detection and classification towards a computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput. Methods Programs Biomed. 2020, 196, 105584. [Google Scholar] [CrossRef]
  17. Khamparia, A.; Bharati, S.; Podder, P.; Gupta, D.; Khanna, A.; Phung, T.K.; Thanh, D. Diagnosis of breast cancer based on modern mammography using hybrid transfer learning. Multidimens. Syst. Signal Process. 2021, 32, 747–765. [Google Scholar] [CrossRef] [PubMed]
  18. Ragab, D.; Sharkas, M.; Marshall, S.; Ren, J. Breast cancer detection using deep convolutional neural networks and support vector machines. PeerJ 2019, 7, e6201. [Google Scholar] [CrossRef] [PubMed]
  19. Adedigba, A.; Adeshina, S.; Aibinu, A. Performance evaluation of deep learning models on mammogram classification using small dataset. Bioengineering 2022, 9, 161. [Google Scholar] [CrossRef] [PubMed]
  20. Samala, R.; Chan, H.-P.; Hadjiiski, L.; Helvie, M.; Richter, C.; Cha, K. Breast cancer diagnosis in digital breast tomosynthesis: Effects of training sample size on multi-stage transfer learning using deep neural nets. IEEE Trans. Med. Imaging 2019, 38, 686–696. [Google Scholar] [CrossRef]
  21. Ribli, D.; Horvath, A.; Unger, Z.; Pollner, P.; Csabai, I. Detecting and classifying lesions in mammograms with deep learning. Sci. Rep. 2018, 8, 4165. [Google Scholar] [CrossRef]
  22. Al-antari, M.; Al-masni, M.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int. J. Med. Inform. 2018, 117, 44–54. [Google Scholar] [CrossRef] [PubMed]
  23. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 1–9. [Google Scholar] [CrossRef]
  24. Shams, S.; Platania, R.; Zhang, J.; Kim, J.; Lee, K.; Park, S.-J. Deep generative breast cancer screening and diagnosis. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2018), Granada, Spain, 16–20 September 2018; pp. 859–867. [Google Scholar]
  25. Singh, V.K.; Rashwan, H.A.; Romani, S.; Akram, F.; Pandey, N.; Sarker, M.K.; Saleh, A.; Arenas, M.; Arquez, M.; Puig, D.; et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst. Appl. 2020, 139, 112855. [Google Scholar] [CrossRef]
  26. Guan, S.; Loew, M. Breast cancer detection using synthetic mammograms from generative adversarial networks in convolutional neural networks. J. Med. Imaging 2019, 6, 031411. [Google Scholar] [CrossRef]
  27. He, K.; Gan, C.; Li, Z.; Rekik, I.; Yin, Z.; Ji, W.; Gao, Y.; Wang, Q.; Zhang, J.; Shen, D. Transformers in medical image analysis. Intell. Med. 2023, 3, 59–78. [Google Scholar] [CrossRef]
  28. Miller, J.; Arasu, V.; Pu, A.; Margolies, L.; Sieh, W.; Shen, L. Self-supervised deep learning to enhance breast cancer detection on screening mammography. arXiv 2022, arXiv:2203.08812. [Google Scholar] [CrossRef]
  29. Matsoukas, C.; Haslum, J.; Soderberg, M.; Smith, K. Is it time to replace CNNs with transformers for medical images? arXiv 2021, arXiv:2108.09038. [Google Scholar] [CrossRef]
  30. Matsoukas, C.; Haslum, J.; Sorkhei, M.; Soderberg, M.; Smith, K. What makes transfer learning work for medical images: Feature reuse & other factors. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 9215–9224. [Google Scholar] [CrossRef]
  31. Betancourt Tarifa, A.; Marrocco, C.; Molinara, M.; Tortorella, F.; Bria, A. Transformer-based mass detection in digital mammograms. J. Ambient Intell. Humaniz. Comput. 2023, 14, 2723–2737. [Google Scholar] [CrossRef]
  32. Cantone, M.; Marrocco, C.; Tortorella, F.; Bria, A. Convolutional networks and transformers for mammography classification: An experimental study. Sens. 2023, 23, 1229. [Google Scholar] [CrossRef]
  33. Li, J.; Chen, J.; Tang, Y.; Wang, C.; Landman, B.; Zhou, S. Transforming medical imaging with transformers? A comparative review of key properties, current progresses, and future perspectives. Med. Image Anal. 2023, 85, 102762. [Google Scholar] [CrossRef] [PubMed]
  34. Nurgazin, M.; Tu, N. A comparative study of vision transformer encoders and few-shot learning for medical image classification. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Paris, France, 2–6 October 2023; pp. 2505–2513. [Google Scholar] [CrossRef]
  35. Sawyer Lee, R.; Gimenez, F.; Hoogi, A.; Kawai Miyake, K.; Gorovoy, M.; Rubin, D. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 2017, 4, 170177. [Google Scholar] [CrossRef]
  36. Moreira, I.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J. INbreast: Toward a full-field digital mammographic database. Acad. Radiol. 2012, 19, 236–248. [Google Scholar] [CrossRef]
  37. D’Orsi, C.; Sickles, E.; Mendelson, E.; Morris, E. ACR BI-RADS Atlas: Breast Imaging and Reporting Data System, 5th ed.; American College of Radiology: Reston, VA, USA, 2013; ISBN 978-15-5903-016-8. [Google Scholar]
  38. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar] [CrossRef]
  39. Hoffer, E.; Ailon, N. Deep metric learning using Triplet network. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  40. Sun, Y.; Cheng, C.; Zhang, Y.; Zhang, C.; Zheng, L.; Wang, Z.; Wei, Z. Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar] [CrossRef]
  41. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef]
  42. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar] [CrossRef]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
  44. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
  45. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
  46. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:1710.05941. [Google Scholar] [CrossRef]
  47. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE conference on computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
  48. Huang, J.; Ling, C.X. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans. Knowl. Data Eng. 2005, 17, 299–310. [Google Scholar] [CrossRef]
Figure 1. The architecture of deep Siamese network for n-shot image classification.
Figure 2. Sample images from INbreast and CBIS-DDSM datasets.
Figure 3. An example training episode for 2-shot, 3-class image classification task. For illustration purposes, in the figure mammograms are substituted with animal images to facilitate comprehension.
Figure 4. ROC curves for various 3-way, n-shot settings with (left side) triplet loss and (right side) circle loss on INbreast dataset. t* denotes the optimal threshold value that best balances the trade-off between sensitivity and (1 − specificity).
Figure 5. ROC curves for various 3-way, n-shot settings with (left side) triplet loss and (right side) circle loss on CBIS-DDSM dataset. t* denotes the optimal threshold value that best balances the trade-off between sensitivity and (1 − specificity).
Figure 6. ROC curves for various 2-way, n-shot settings with (left side) triplet loss and (right side) circle loss on INbreast dataset. t* denotes the optimal threshold value that best balances the trade-off between sensitivity and (1 − specificity).
Figure 7. ROC curves for various 2-way, n-shot settings with (left side) triplet loss and (right side) circle loss on CBIS-DDSM dataset. t* denotes the optimal threshold value that best balances the trade-off between sensitivity and (1 − specificity).
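For reference, the optimal threshold t* marked in the ROC figures can be computed along the lines of the sketch below; Youden's J statistic (TPR minus FPR) is assumed here as the balancing criterion, since the captions do not name one explicitly, and scikit-learn is used for the ROC computation.

import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(y_true, y_score):
    """Return the threshold t* maximizing Youden's J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]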
Table 1. Data distribution over training, validation, and testing subsets for CBIS-DDSM and INbreast datasets.
Classes | CBIS-DDSM Training | CBIS-DDSM Validation | CBIS-DDSM Testing | INbreast Training | INbreast Validation | INbreast Testing
Normal | 273 | 48 | 79 | 47 | 10 | 10
Benign | 273 | 48 | 79 | 47 | 10 | 10
Malignant | 273 | 48 | 79 | 47 | 10 | 10
Total | 819 | 144 | 237 | 141 | 30 | 30
Table 2. Performance results for various 3-way, n-shot settings with triplet loss on INbreast dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.8667 | 0.8660 | 0.8667 | 0.9343 | 0.8610 | 0.8364
GoogLeNet | 4 | 0.8333 | 0.8712 | 0.8333 | 0.9029 | 0.8064 | 0.9097
GoogLeNet | 5 | 0.8333 | 0.8393 | 0.8333 | 0.9221 | 0.8349 | 0.7613
GoogLeNet | 6 | 0.8000 | 0.7903 | 0.8000 | 0.8888 | 0.7907 | 0.7181
GoogLeNet | 7 | 0.7667 | 0.7667 | 0.7667 | 0.8787 | 0.7607 | 0.7828
ResNet50 | 3 | 0.8333 | 0.8492 | 0.8333 | 0.9202 | 0.8326 | 0.8921
ResNet50 | 4 | 0.8333 | 0.8360 | 0.8333 | 0.8989 | 0.8334 | 0.8800
ResNet50 | 5 | 0.7333 | 0.7374 | 0.7333 | 0.8433 | 0.7320 | 0.8851
ResNet50 | 6 | 0.8000 | 0.8026 | 0.8000 | 0.8888 | 0.8001 | 0.8538
ResNet50 | 7 | 0.6667 | 0.6654 | 0.6667 | 0.8191 | 0.6452 | 0.8225
MobileNetV3 | 3 | 0.6333 | 0.4821 | 0.6333 | 0.7473 | 0.5471 | 0.8542
MobileNetV3 | 4 | 0.6333 | 0.4965 | 0.6333 | 0.7221 | 0.5490 | 0.8253
MobileNetV3 | 5 | 0.6667 | 0.5084 | 0.6667 | 0.7806 | 0.5765 | 0.8878
MobileNetV3 | 6 | 0.8333 | 0.8575 | 0.8333 | 0.8970 | 0.8287 | 0.9111
MobileNetV3 | 7 | 0.8333 | 0.8574 | 0.8333 | 0.8758 | 0.8142 | 0.8345
Table 3. Performance results for various 3-way, n-shot settings with circle loss on INbreast dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.8000 | 0.8000 | 0.8000 | 0.8888 | 0.8000 | 0.8041
GoogLeNet | 4 | 0.8667 | 0.8661 | 0.8667 | 0.9111 | 0.8644 | 0.9008
GoogLeNet | 5 | 0.8000 | 0.8026 | 0.8000 | 0.8888 | 0.8001 | 0.7652
GoogLeNet | 6 | 0.8000 | 0.8007 | 0.8000 | 0.8656 | 0.7988 | 0.8656
GoogLeNet | 7 | 0.8333 | 0.8550 | 0.8333 | 0.9241 | 0.8329 | 0.9157
ResNet50 | 3 | 0.8000 | 0.8003 | 0.8000 | 0.8656 | 0.7945 | 0.7890
ResNet50 | 4 | 0.9000 | 0.9060 | 0.9000 | 0.9464 | 0.9016 | 0.8251
ResNet50 | 5 | 0.7333 | 0.7351 | 0.7333 | 0.8433 | 0.7303 | 0.7022
ResNet50 | 6 | 0.7333 | 0.7775 | 0.7333 | 0.8877 | 0.7220 | 0.6316
ResNet50 | 7 | 0.9000 | 0.9042 | 0.9000 | 0.9212 | 0.9003 | 0.6642
MobileNetV3 | 3 | 0.8333 | 0.8352 | 0.8333 | 0.8989 | 0.8324 | 0.7254
MobileNetV3 | 4 | 0.9000 | 0.9133 | 0.9000 | 0.9424 | 0.8981 | 0.8347
MobileNetV3 | 5 | 0.7333 | 0.7176 | 0.7333 | 0.8453 | 0.7161 | 0.8155
MobileNetV3 | 6 | 0.7333 | 0.7268 | 0.7333 | 0.8221 | 0.7253 | 0.8312
MobileNetV3 | 7 | 0.7333 | 0.7284 | 0.7333 | 0.8221 | 0.7276 | 0.8731
Table 4. Performance results for various 3-way, n-shot settings with triplet loss on CBIS-DDSM dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.8565 | 0.8548 | 0.8565 | 0.9283 | 0.8540 | 0.9335
GoogLeNet | 4 | 0.8439 | 0.8409 | 0.8439 | 0.9219 | 0.8407 | 0.9198
GoogLeNet | 5 | 0.8397 | 0.8387 | 0.8397 | 0.9198 | 0.8377 | 0.9279
GoogLeNet | 6 | 0.8565 | 0.8554 | 0.8565 | 0.9283 | 0.8555 | 0.9346
GoogLeNet | 7 | 0.8523 | 0.8520 | 0.8523 | 0.9262 | 0.8522 | 0.9245
ResNet50 | 3 | 0.8776 | 0.8779 | 0.8776 | 0.9388 | 0.8763 | 0.9001
ResNet50 | 4 | 0.8481 | 0.8505 | 0.8481 | 0.9241 | 0.8461 | 0.9132
ResNet50 | 5 | 0.8608 | 0.8640 | 0.8608 | 0.9304 | 0.8587 | 0.9174
ResNet50 | 6 | 0.8608 | 0.8617 | 0.8608 | 0.9304 | 0.8603 | 0.9093
ResNet50 | 7 | 0.8523 | 0.8549 | 0.8523 | 0.9262 | 0.8495 | 0.9038
MobileNetV3 | 3 | 0.8819 | 0.8822 | 0.8819 | 0.9409 | 0.8820 | 0.9107
MobileNetV3 | 4 | 0.8987 | 0.8994 | 0.8987 | 0.9494 | 0.8990 | 0.9160
MobileNetV3 | 5 | 0.8861 | 0.8856 | 0.8861 | 0.9430 | 0.8856 | 0.9018
MobileNetV3 | 6 | 0.8776 | 0.8784 | 0.8776 | 0.9388 | 0.8779 | 0.9024
MobileNetV3 | 7 | 0.8819 | 0.8823 | 0.8819 | 0.9409 | 0.8806 | 0.9153
Table 5. Performance results for various 3-way, n-shot settings with circle loss on CBIS-DDSM dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.8354 | 0.8346 | 0.8354 | 0.9177 | 0.8348 | 0.9343
GoogLeNet | 4 | 0.8692 | 0.8680 | 0.8692 | 0.9346 | 0.8667 | 0.9320
GoogLeNet | 5 | 0.8565 | 0.8559 | 0.8565 | 0.9283 | 0.8557 | 0.9401
GoogLeNet | 6 | 0.8481 | 0.8498 | 0.8481 | 0.9241 | 0.8472 | 0.9428
GoogLeNet | 7 | 0.8481 | 0.8495 | 0.8481 | 0.9241 | 0.8474 | 0.9371
ResNet50 | 3 | 0.8650 | 0.8683 | 0.8650 | 0.9325 | 0.8639 | 0.9270
ResNet50 | 4 | 0.8734 | 0.8726 | 0.8734 | 0.9367 | 0.8724 | 0.9271
ResNet50 | 5 | 0.8650 | 0.8667 | 0.8650 | 0.9325 | 0.8621 | 0.9206
ResNet50 | 6 | 0.8987 | 0.9020 | 0.8987 | 0.9494 | 0.8965 | 0.9450
ResNet50 | 7 | 0.8819 | 0.8816 | 0.8819 | 0.9409 | 0.8813 | 0.9358
MobileNetV3 | 3 | 0.8776 | 0.8824 | 0.8776 | 0.9388 | 0.8737 | 0.9363
MobileNetV3 | 4 | 0.8734 | 0.8738 | 0.8734 | 0.9367 | 0.8717 | 0.9265
MobileNetV3 | 5 | 0.8565 | 0.8563 | 0.8565 | 0.9283 | 0.8557 | 0.9124
MobileNetV3 | 6 | 0.8523 | 0.8643 | 0.8523 | 0.9262 | 0.8523 | 0.9089
MobileNetV3 | 7 | 0.8354 | 0.8471 | 0.8354 | 0.9177 | 0.8322 | 0.9061
Table 6. Performance results for various 2-way, n-shot settings with triplet loss on INbreast dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.9000 | 0.9474 | 0.9000 | 0.9000 | 0.9231 | 0.9225
GoogLeNet | 4 | 0.9000 | 1.0000 | 0.8500 | 1.0000 | 0.9189 | 0.9550
GoogLeNet | 5 | 0.9000 | 0.9474 | 0.9000 | 0.9000 | 0.9231 | 0.9200
GoogLeNet | 6 | 0.9000 | 0.9474 | 0.9000 | 0.9000 | 0.9231 | 0.9250
GoogLeNet | 7 | 0.8667 | 0.9444 | 0.8500 | 0.9000 | 0.8947 | 0.9025
ResNet50 | 3 | 0.9667 | 1.0000 | 0.9500 | 1.0000 | 0.9744 | 0.9850
ResNet50 | 4 | 0.9333 | 0.9500 | 0.9500 | 0.9000 | 0.9500 | 0.9175
ResNet50 | 5 | 0.9000 | 0.9474 | 0.9000 | 0.9000 | 0.9231 | 0.9150
ResNet50 | 6 | 0.8333 | 0.8571 | 0.9000 | 0.7000 | 0.8780 | 0.7600
ResNet50 | 7 | 0.7000 | 0.7200 | 0.9000 | 0.3000 | 0.8000 | 0.6250
MobileNetV3 | 3 | 0.8667 | 0.9444 | 0.8500 | 0.9000 | 0.8947 | 0.8325
MobileNetV3 | 4 | 0.8000 | 0.8182 | 0.9000 | 0.6000 | 0.8571 | 0.7025
MobileNetV3 | 5 | 0.8667 | 0.9444 | 0.8500 | 0.9000 | 0.8947 | 0.8725
MobileNetV3 | 6 | 0.8667 | 0.8333 | 1.0000 | 0.6000 | 0.9091 | 0.7100
MobileNetV3 | 7 | 0.8000 | 0.7692 | 1.0000 | 0.4000 | 0.8696 | 0.5850
Table 7. Performance results for various 2-way, n-shot settings with circle loss on INbreast dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.8333 | 0.8571 | 0.9000 | 0.7000 | 0.8780 | 0.7475
GoogLeNet | 4 | 0.9667 | 1.0000 | 0.9500 | 1.0000 | 0.9744 | 0.9900
GoogLeNet | 5 | 0.8333 | 0.8571 | 0.9000 | 0.7000 | 0.8780 | 0.8775
GoogLeNet | 6 | 0.8667 | 0.8636 | 0.9500 | 0.7000 | 0.9048 | 0.8100
GoogLeNet | 7 | 0.9000 | 1.0000 | 0.8500 | 1.0000 | 0.9189 | 0.9300
ResNet50 | 3 | 0.8333 | 0.8261 | 0.9500 | 0.6000 | 0.8837 | 0.7450
ResNet50 | 4 | 0.9000 | 0.9474 | 0.9000 | 0.9000 | 0.9231 | 0.8875
ResNet50 | 5 | 0.8000 | 0.8182 | 0.9000 | 0.6000 | 0.8571 | 0.7125
ResNet50 | 6 | 0.9000 | 1.0000 | 0.8500 | 1.0000 | 0.9189 | 0.9475
ResNet50 | 7 | 0.9667 | 0.9524 | 1.0000 | 0.9000 | 0.9756 | 0.9225
MobileNetV3 | 3 | 0.8667 | 0.8636 | 0.9500 | 0.7000 | 0.9048 | 0.7200
MobileNetV3 | 4 | 0.9000 | 0.8696 | 1.0000 | 0.7000 | 0.9302 | 0.7375
MobileNetV3 | 5 | 0.8000 | 0.8500 | 0.8500 | 0.7000 | 0.8500 | 0.6700
MobileNetV3 | 6 | 0.8333 | 0.8571 | 0.9000 | 0.7000 | 0.8780 | 0.7850
MobileNetV3 | 7 | 0.8000 | 0.8182 | 0.9000 | 0.6000 | 0.8571 | 0.7650
Table 8. Performance results for various 2-way, n-shot settings with triplet loss on CBIS-DDSM dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.9578 | 0.9933 | 0.9430 | 0.9873 | 0.9675 | 0.9700
GoogLeNet | 4 | 0.9578 | 1.0000 | 0.9367 | 1.0000 | 0.9673 | 0.9599
GoogLeNet | 5 | 0.9409 | 0.9800 | 0.9304 | 0.9620 | 0.9545 | 0.9675
GoogLeNet | 6 | 0.9451 | 0.9739 | 0.9430 | 0.9494 | 0.9582 | 0.9641
GoogLeNet | 7 | 0.9283 | 0.9490 | 0.9430 | 0.8987 | 0.9460 | 0.9680
ResNet50 | 3 | 0.9536 | 0.9804 | 0.9494 | 0.9620 | 0.9646 | 0.9539
ResNet50 | 4 | 0.9367 | 0.9613 | 0.9430 | 0.9241 | 0.9521 | 0.9788
ResNet50 | 5 | 0.9367 | 0.9554 | 0.9494 | 0.9114 | 0.9524 | 0.9628
ResNet50 | 6 | 0.9367 | 0.9497 | 0.9557 | 0.8987 | 0.9527 | 0.9637
ResNet50 | 7 | 0.9494 | 0.9803 | 0.9430 | 0.9620 | 0.9613 | 0.9620
MobileNetV3 | 3 | 0.9536 | 0.9623 | 0.9684 | 0.9241 | 0.9653 | 0.9641
MobileNetV3 | 4 | 0.9578 | 0.9625 | 0.9747 | 0.9241 | 0.9686 | 0.9563
MobileNetV3 | 5 | 0.9662 | 0.9808 | 0.9684 | 0.9620 | 0.9745 | 0.9634
MobileNetV3 | 6 | 0.9494 | 0.9563 | 0.9684 | 0.9114 | 0.9623 | 0.9819
MobileNetV3 | 7 | 0.9747 | 0.9935 | 0.9684 | 0.9873 | 0.9808 | 0.9712
Table 9. Performance results for various 2-way, n-shot settings with circle loss on CBIS-DDSM dataset.
Encoder | N-Shot | Accuracy | PPV | TPR | TNR | F1-Score | AUC
GoogLeNet | 3 | 0.9198 | 0.9484 | 0.9304 | 0.8987 | 0.9393 | 0.9854
GoogLeNet | 4 | 0.9578 | 1.0000 | 0.9367 | 1.0000 | 0.9673 | 0.9728
GoogLeNet | 5 | 0.9451 | 0.9739 | 0.9430 | 0.9494 | 0.9582 | 0.9843
GoogLeNet | 6 | 0.9409 | 0.9675 | 0.9430 | 0.9367 | 0.9551 | 0.9760
GoogLeNet | 7 | 0.9325 | 0.9610 | 0.9367 | 0.9241 | 0.9487 | 0.9739
ResNet50 | 3 | 0.9409 | 0.9557 | 0.9557 | 0.9114 | 0.9557 | 0.9672
ResNet50 | 4 | 0.9536 | 0.9804 | 0.9494 | 0.9620 | 0.9646 | 0.9577
ResNet50 | 5 | 0.9536 | 0.9933 | 0.9367 | 0.9873 | 0.9642 | 0.9667
ResNet50 | 6 | 0.9705 | 1.0000 | 0.9557 | 1.0000 | 0.9773 | 0.9738
ResNet50 | 7 | 0.9536 | 0.9742 | 0.9557 | 0.9494 | 0.9649 | 0.9667
MobileNetV3 | 3 | 0.9620 | 0.9934 | 0.9494 | 0.9873 | 0.9709 | 0.9808
MobileNetV3 | 4 | 0.9536 | 0.9742 | 0.9557 | 0.9494 | 0.9649 | 0.9742
MobileNetV3 | 5 | 0.9578 | 0.9805 | 0.9557 | 0.9620 | 0.9679 | 0.9805
MobileNetV3 | 6 | 0.9451 | 0.9503 | 0.9684 | 0.8987 | 0.9592 | 0.9657
MobileNetV3 | 7 | 0.9451 | 0.9677 | 0.9494 | 0.9367 | 0.9585 | 0.9690
Table 10. Performance comparison between selected fine-tuned pre-trained CNN models for various k-way settings on INbreast dataset.
K-Way | Encoder | Accuracy | PPV | TPR | TNR | F1-Score | AUC
2 | GoogLeNet | 0.5333 | 1.0000 | 0.3000 | 1.0000 | 0.4615 | 0.7900
2 | ResNet50 | 0.5667 | 0.7692 | 0.5000 | 0.7000 | 0.6061 | 0.5300
2 | MobileNetV3 | 0.4000 | 0.6250 | 0.2500 | 0.7000 | 0.3571 | 0.5700
3 | GoogLeNet | 0.4333 | 0.7710 | 0.4333 | 0.8256 | 0.4089 | 0.4099
3 | ResNet50 | 0.4667 | 0.7383 | 0.4667 | 0.8298 | 0.4114 | 0.3396
3 | MobileNetV3 | 0.3667 | 0.6280 | 0.3667 | 0.7821 | 0.3717 | 0.3835
Table 11. Performance comparison between selected fine-tuned pre-trained CNN models for various k-way settings on CBIS-DDSM dataset.
K-Way | Encoder | Accuracy | PPV | TPR | TNR | F1-Score | AUC
2 | GoogLeNet | 0.8481 | 0.9178 | 0.8481 | 0.8481 | 0.8816 | 0.6657
2 | ResNet50 | 0.8608 | 0.8743 | 0.9241 | 0.7342 | 0.8985 | 0.6128
2 | MobileNetV3 | 0.5021 | 0.9545 | 0.2658 | 0.9747 | 0.4158 | 0.8571
3 | GoogLeNet | 0.6287 | 0.6220 | 0.6287 | 0.8143 | 0.6231 | 0.4747
3 | ResNet50 | 0.6160 | 0.6321 | 0.6160 | 0.8080 | 0.6218 | 0.4546
3 | MobileNetV3 | 0.4135 | 0.5097 | 0.4135 | 0.7068 | 0.3105 | 0.4973
Table 12. p values of the Kruskal–Wallis test for various n-shot settings.
Dataset | K-Way | Triplet Loss TPR | Triplet Loss AUC | Circle Loss TPR | Circle Loss AUC
INbreast | 2 | 0.8386 | 0.3386 | 0.4526 | 0.1870
INbreast | 3 | 0.9317 | 0.5804 | 0.0682 | 0.3839
CBIS-DDSM | 2 | 0.9766 | 0.8266 | 0.7946 | 0.5690
CBIS-DDSM | 3 | 0.9210 | 0.9876 | 0.7525 | 0.8384
Table 13. p values of the Kruskal–Wallis test for various base encoders. Statistically significant differences (p < 0.05) are marked with an asterisk (*).
Dataset | K-Way | Triplet Loss TPR | Triplet Loss AUC | Circle Loss TPR | Circle Loss AUC
INbreast | 2 | 0.3096 | 0.0323 * | 0.9676 | 0.0935
INbreast | 3 | 0.2709 | 0.2808 | 0.6903 | 0.0590
CBIS-DDSM | 2 | 0.0027 * | 0.5434 | 0.0184 * | 0.0437 *
CBIS-DDSM | 3 | 0.0063 * | 0.0092 * | 0.0664 | 0.0539
Table 14. Results of Mann–Whitney U test on INbreast dataset.
K-Way | Metric | Loss | Encoder | GoogLeNet | ResNet50 | MobileNetV3
2 | AUC | Triplet | GoogLeNet | - | 0.3095 | 0.0079
2 | AUC | Triplet | ResNet50 | 0.3095 | - | 0.2222
2 | AUC | Triplet | MobileNetV3 | 0.0079 | 0.2222 | -
Table 15. Results of Mann–Whitney U test on CBIS-DDSM dataset.
K-Way | Metric | Loss | Encoder | GoogLeNet | ResNet50 | MobileNetV3
2 | TPR | Triplet | GoogLeNet | - | 0.0442 | 0.0088
2 | TPR | Triplet | ResNet50 | 0.0442 | - | 0.0092
2 | TPR | Triplet | MobileNetV3 | 0.0088 | 0.0092 | -
2 | TPR | Circle | GoogLeNet | - | 0.0532 | 0.0112
2 | TPR | Circle | ResNet50 | 0.0532 | - | 0.6513
2 | TPR | Circle | MobileNetV3 | 0.0112 | 0.6513 | -
2 | AUC | Circle | GoogLeNet | - | 0.0159 | 0.4206
2 | AUC | Circle | ResNet50 | 0.0159 | - | 0.1508
2 | AUC | Circle | MobileNetV3 | 0.4206 | 0.1508 | -
3 | TPR | Triplet | GoogLeNet | - | 0.1706 | 0.0117
3 | TPR | Triplet | ResNet50 | 0.1706 | - | 0.0153
3 | TPR | Triplet | MobileNetV3 | 0.0117 | 0.0153 | -
3 | AUC | Triplet | GoogLeNet | - | 0.0079 | 0.0079
3 | AUC | Triplet | ResNet50 | 0.0079 | - | 1.0000
3 | AUC | Triplet | MobileNetV3 | 0.0079 | 1.0000 | -