Search Results (35)

Search Parameters:
Keywords = automatic mask removal

14 pages, 1569 KB  
Article
A Summary of Pain Locations and Neuropathic Patterns Extracted Automatically from Patient Self-Reported Sensation Drawings
by Andrew Bishara, Elisabetta de Rinaldis, Trisha F. Hue, Thomas Peterson, Jennifer Cummings, Abel Torres-Espin, Jeannie F. Bailey, Jeffrey C. Lotz and REACH Investigators
Int. J. Environ. Res. Public Health 2025, 22(9), 1456; https://doi.org/10.3390/ijerph22091456 - 19 Sep 2025
Viewed by 517
Abstract
Background: Chronic low-back pain (LBP) is the largest contributor to disability worldwide, yet many assessments still reduce a complex, spatially distributed condition to a single 0–10 score. Body-map drawings capture the location and extent of pain, but manual digitization is too slow and inconsistent for large studies or real-time telehealth. Methods: Paper pain drawings from 332 adults in the multicenter COMEBACK study (four University of California sites, March 2021–June 2023) were scanned to PDFs. A Python pipeline automatically (i) rasterized PDF pages with pdf2image v1.17.0; (ii) resized each scan and delineated anterior/posterior regions of interest; (iii) registered patient silhouettes to a canonical high-resolution template using ORB key-points, brute-force Hamming matching, RANSAC inlier selection, and 3 × 3 projective homography implemented in OpenCV; (iv) removed template outlines via adaptive Gaussian thresholding, Canny edge detection, and 3 × 3 dilation, leaving only patient-drawn strokes; (v) produced binary masks for pain, numbness, and pins-and-needles, then stacked these across subjects to create pixel-frequency matrices; and (vi) normalized the matrices with min–max scaling and rendered heat maps. RGB composites assigned a distinct channel to each sensation, enabling intuitive visualization of overlapping symptom distributions and supporting future data analyses. Results: Cohort-level maps replicated classic low-back pain hotspots over the lumbar paraspinals, gluteal fold, and posterior thighs, while exposing less-recognized clusters along the lateral hip and lower abdomen. Neuropathic-leaning drawings displayed broader leg involvement than purely nociceptive patterns. Conclusions: Our automated workflow converts pen-on-paper pain drawings into machine-readable digitized images and heat maps at the population scale, laying practical groundwork for spatially informed, precision management of chronic LBP. Full article
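Steps (v) and (vi) of this pipeline, stacking per-subject binary masks into a pixel-frequency matrix and min–max normalizing it, reduce to a few lines of NumPy. A minimal sketch (the function name and degenerate-case handling are illustrative assumptions, not the authors' code):

```python
import numpy as np

def pixel_frequency_heatmap(masks):
    """Stack per-subject binary masks and min-max normalize to [0, 1]."""
    freq = np.stack(masks).astype(np.float64).sum(axis=0)
    lo, hi = freq.min(), freq.max()
    if hi == lo:                      # degenerate case: uniform frequency
        return np.zeros_like(freq)
    return (freq - lo) / (hi - lo)
```

The per-sensation heat maps produced this way can then be placed in separate RGB channels to visualize overlap.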

23 pages, 63827 KB  
Article
A Two-Stage Weed Detection and Localization Method for Lily Fields Targeting Laser Weeding
by Yanlei Xu, Chao Liu, Jiahao Liang, Xiaomin Ji and Jian Li
Agriculture 2025, 15(18), 1967; https://doi.org/10.3390/agriculture15181967 - 18 Sep 2025
Viewed by 369
Abstract
The cultivation of edible lilies is highly susceptible to weed infestation during their growth period, and the application of herbicides is often impractical, leading to the rampant growth of diverse weed species. Laser weeding, recognized as an efficient and precise method for field weed management, presents a novel solution to the weed challenges in lily fields. The accurate localization of weed regions and the optimal selection of laser targeting points are crucial technologies for successful laser weeding implementation. In this study, we propose a two-stage weed detection and localization method specifically designed for lily fields. In the first stage, we introduce an enhanced detection model named YOLO-Morse, aimed at identifying and removing lily plants. YOLO-Morse is built upon the YOLOv8 architecture and integrates the RCS-MAS backbone, the SPD-Conv spatial enhancement module, and an adaptive focal loss function (ATFL) to enhance detection accuracy in conditions characterized by sample imbalance and complex backgrounds. Experimental results indicate that YOLO-Morse achieves a mean Average Precision (mAP) of 86%, reflecting a 3.2% improvement over the original YOLOv8, and facilitates stable identification of lily regions. Subsequently, a ResNet-based segmentation network is employed to conduct semantic segmentation on the detected lily targets. The segmented results are utilized to mask the original lily areas in the image, thereby generating weed-only images for the subsequent stage. In the second stage, these weed-only images are analyzed in the HSV color space combined with morphological processing to precisely extract green weed regions. The centroid of the weed coordinate set is automatically determined as the laser targeting point. The proposed system exhibits superior performance in weed detection, achieving a Precision, Recall, and F1-score of 94.97%, 90.00%, and 92.42%, respectively. The proposed two-stage approach significantly enhances multi-weed detection performance in complex environments, improving detection accuracy while maintaining operational efficiency and cost-effectiveness. This work provides a precise, efficient, and intelligent laser weeding solution for weed management in lily fields. Although certain limitations remain, such as environmental lighting variation, leaf occlusion, and computational resource constraints, the method still exhibits significant potential for broader application in other high-value crops. Full article
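The laser-targeting step, taking the centroid of the extracted weed pixel coordinates, can be sketched as follows, assuming the HSV thresholding and morphological processing have already produced a binary weed mask (the function name and the None return for empty masks are illustrative choices, not the paper's interface):

```python
import numpy as np

def laser_target_point(weed_mask):
    """Centroid (row, col) of weed pixels; None if the mask is empty."""
    ys, xs = np.nonzero(weed_mask)
    if ys.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))
```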
(This article belongs to the Special Issue Plant Diagnosis and Monitoring for Agricultural Production)

18 pages, 2065 KB  
Article
Phoneme-Aware Augmentation for Robust Cantonese ASR Under Low-Resource Conditions
by Lusheng Zhang, Shie Wu and Zhongxun Wang
Symmetry 2025, 17(9), 1478; https://doi.org/10.3390/sym17091478 - 8 Sep 2025
Viewed by 622
Abstract
Cantonese automatic speech recognition (ASR) faces persistent challenges due to its nine lexical tones, extensive phonological variation, and the scarcity of professionally transcribed corpora. To address these issues, we propose a lightweight and data-efficient framework that leverages weak phonetic supervision (WPS) in conjunction with two phoneme-aware augmentation strategies. (1) Dynamic Boundary-Aligned Phoneme Dropout progressively removes entire IPA segments according to a curriculum schedule, simulating real-world phenomena such as elision, lenition, and tonal drift while ensuring training stability. (2) Phoneme-Aware SpecAugment confines all time- and frequency-masking operations within phoneme boundaries and prioritizes high-attention regions, thereby preserving intra-phonemic contours and formant integrity. Built on the Whistle encoder, which integrates a Conformer backbone, Connectionist Temporal Classification–Conditional Random Field (CTC-CRF) alignment, and a multilingual phonetic space, the approach requires only a grapheme-to-phoneme lexicon and Montreal Forced Aligner outputs, without any additional manual labeling. Experiments on the Cantonese subset of Common Voice demonstrate consistent gains: Dynamic Dropout alone reduces phoneme error rate (PER) from 17.8% to 16.7% with 50 h of speech and 16.4% to 15.1% with 100 h, while the combination of the two augmentations further lowers PER to 15.9%/14.4%. These results confirm that structure-aware phoneme-level perturbations provide an effective and low-cost solution for building robust Cantonese ASR systems under low-resource conditions. Full article
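A minimal sketch of the boundary-confined time masking described above, assuming phoneme boundaries arrive as frame-index pairs from a forced aligner; the paper's curriculum schedule and attention prioritization are omitted, and the mask-width fraction is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def phoneme_time_mask(spec, boundaries, max_frac=0.5):
    """Zero a random time span inside each phoneme segment, never crossing
    its boundaries (a sketch of phoneme-aware SpecAugment time masking).

    spec: (freq_bins, frames) array; boundaries: [(start, end), ...] frames.
    """
    out = spec.copy()
    for start, end in boundaries:            # frame indices [start, end)
        seg = end - start
        w = rng.integers(0, max(1, int(seg * max_frac)) + 1)
        if w == 0:
            continue
        t0 = start + rng.integers(0, seg - w + 1)  # span stays in-segment
        out[:, t0:t0 + w] = 0.0
    return out
```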
(This article belongs to the Section Computer)

21 pages, 3230 KB  
Article
Active Contours Connected Component Analysis Segmentation Method of Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(6), 642; https://doi.org/10.3390/bioengineering12060642 - 12 Jun 2025
Viewed by 716
Abstract
Automatic segmentation of nuclei on breast cancer histology images is a basic and important step for diagnosis in a computer-aided diagnostic approach and helps pathologists discover cancer early. Nuclei segmentation remains a challenging problem due to cancer biology and the variability of tissue characteristics; thus, their detection in an image is a very tedious and time-consuming task. In this context, overlapping nuclei objects present difficulties in separating them by conventional segmentation methods; thus, active contours can be employed in image segmentation. A major limitation of the active contours method is its inability to resolve image boundaries/edges of intersecting objects, so it segments multiple overlapping objects as a single object. Therefore, we present a hybrid active contour (connected component + active contours) method to segment cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Then, a stain normalization technique is applied to these augmented images to isolate nuclei features from tissue structures. Secondly, morphological operations, namely erosion, dilation, opening, and distance transform, are used to highlight foreground and background pixels while removing overlapping regions from the highlighted nuclei objects on the image. Next, the connected components method groups these highlighted pixel components with similar intensity values and assigns them to their relevant labeled component to form a binary mask. Once all binary-masked groups have been determined, a deep-learning recurrent neural network (RNN) model from the Keras architecture uses this information to automatically segment nuclei objects having cancerous lesions on the image via the active contours method. This approach therefore uses the capabilities of connected components analysis to overcome the limitations of the active contours method. The segmentation method is evaluated on an unsupervised, augmented human breast cancer histology dataset of 15,179 images and achieves an accuracy of 98.71%. Full article
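The connected-components grouping used here is a standard operation; a minimal pure-Python sketch with 4-connectivity (the abstract does not specify the actual implementation or connectivity, so both are assumptions):

```python
from collections import deque

def connected_components(grid):
    """Label 4-connected foreground pixels via BFS flood fill.

    Returns (label grid with 0 = background, number of components)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not labels[i][j]:
                count += 1
                labels[i][j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Each labeled component can then be turned into its own binary mask before the active-contours stage.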
(This article belongs to the Section Biosignal Processing)

24 pages, 2991 KB  
Article
Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(4), 364; https://doi.org/10.3390/bioengineering12040364 - 31 Mar 2025
Viewed by 911
Abstract
The early detection of cancerous lesions is a challenging task given the cancer biology and the variability in tissue characteristics, thus rendering medical image analysis tedious and time-inefficient. In the past, conventional computer-aided diagnosis (CAD) and detection methods have heavily relied on the visual inspection of medical images, which is ineffective, particularly for large and visible cancerous lesions in such images. Additionally, conventional methods face challenges in analyzing objects in large images due to overlapping/intersecting objects and the inability to resolve their image boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Secondly, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Thirdly, morphology operation techniques, namely erosion, dilation, opening, and a distance transform, are used to enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects in the image. Subsequently, image segmentation is handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their relevant labeled components (binary masks). These binary masks are then used in the active contours method for further segmentation by highlighting the boundaries/edges of ROIs. 
Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via the blob detection method. This proposed approach utilizes the capabilities of both the connected components method and the active contours method to resolve the limitations of blob detection. The detection method is evaluated on an unsupervised, augmented human breast cancer histology dataset of 27,249 images and achieves an F1 score of 98.82%. Full article

15 pages, 2289 KB  
Article
Automatic Watershed Segmentation of Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga and Ernest Mnkandla
Appl. Sci. 2024, 14(22), 10394; https://doi.org/10.3390/app142210394 - 12 Nov 2024
Cited by 2 | Viewed by 1607
Abstract
Segmentation of nuclei in histology images is key to analyzing and quantifying morphological changes of nuclei features and tissue structures. Conventional diagnosis, segmentation, and detection methods have relied heavily on the manual visual inspection of histology images. These methods are effective only on clearly visible cancerous lesions, and their performance is limited by the complexity of tissue structures in histology images. Hence, early detection of breast cancer is key for treatment and benefits from computer-aided diagnostic (CAD) systems introduced to efficiently and automatically segment and detect nuclei in pathology images. This paper proposes an automatic watershed segmentation method for cancerous lesions in unsupervised human breast histology images. Firstly, this approach pre-processes data through various augmentation methods to increase the size of the dataset, then a stain normalization technique is applied to these augmented images to isolate nuclei features from tissue structures. Secondly, data enhancement techniques, namely erosion, dilation, and distance transform, are used to highlight foreground and background pixels while removing unwanted regions from the highlighted nuclei objects on the image. Next, the connected components method groups these highlighted pixel components with similar intensity values and assigns them to their relevant labeled component binary mask. Once all binary-masked groups have been determined, a deep-learning recurrent neural network from the Keras architecture uses this information to automatically segment nuclei objects with cancerous lesions and their edges on the image via watershed filling. This segmentation method is evaluated on an unsupervised, augmented human breast cancer histology dataset of 11,151 images and achieves an F1 score of 98%. Full article

20 pages, 7605 KB  
Article
A Novel Adversarial Example Detection Method Based on Frequency Domain Reconstruction for Image Sensors
by Shuaina Huang, Zhiyong Zhang and Bin Song
Sensors 2024, 24(17), 5507; https://doi.org/10.3390/s24175507 - 25 Aug 2024
Cited by 2 | Viewed by 3787
Abstract
Convolutional neural networks (CNNs) have been extensively used in numerous remote sensing image detection tasks owing to their exceptional performance. Nevertheless, CNNs are often vulnerable to adversarial examples, limiting their use in safety-critical scenarios. Recently, how to efficiently detect adversarial examples and improve the robustness of CNNs has drawn considerable attention. Existing adversarial example detection methods require modifying CNNs, which not only affects model performance but also greatly increases training cost. To address these problems, this study proposes a detection algorithm for adversarial examples that needs no modification of the CNN models and simultaneously retains the classification accuracy of normal examples. Specifically, we design a method to detect adversarial examples using frequency domain reconstruction. After converting the input adversarial examples into the frequency domain by Fourier transform, the adversarial disturbance introduced by attacks can be eliminated by modifying the example's frequency components. The inverse Fourier transform is then used to maximize the recovery of the original example. Firstly, we train a CNN to reconstruct input examples. Then, we apply a Fourier transform, a convolution operation, and an inverse Fourier transform to the features of the input examples to automatically filter out adversarial frequencies. We refer to our proposed method as FDR (frequency domain reconstruction), which removes adversarial interference by converting input samples into the frequency domain and reconstructing them back into the spatial domain to restore the image. In addition, we introduce gradient masking into the proposed FDR method to enhance the detection accuracy of the model for complex adversarial examples.
We conduct extensive experiments on five mainstream adversarial attacks on three benchmark datasets, and the experimental results show that FDR can outperform state-of-the-art solutions in detecting adversarial examples. Additionally, FDR does not require any modifications to the detector and can be integrated with other adversarial example detection methods to be installed in sensing devices to ensure detection safety. Full article
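The core FFT round trip behind FDR (transform, suppress frequencies, inverse transform) can be illustrated with a fixed low-pass mask; the actual method learns its filtering with a trained CNN, so the hard cutoff below is purely an assumption for illustration:

```python
import numpy as np

def fft_lowpass(img, keep_frac=0.25):
    """Keep only a central block of low frequencies and reconstruct.

    High-frequency content, where many adversarial perturbations
    concentrate, is zeroed before the inverse FFT."""
    f = np.fft.fftshift(np.fft.fft2(img))     # zero frequency at center
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * keep_frac), int(w * keep_frac)
    mask = np.zeros_like(f)
    mask[cy - ry:cy + ry + 1, cx - rx:cx + rx + 1] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

A constant image survives the round trip unchanged, while a pure high-frequency pattern is removed.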
(This article belongs to the Section Sensing and Imaging)

20 pages, 63242 KB  
Article
Crater Detection and Population Statistics in Tianwen-1 Landing Area Based on Segment Anything Model (SAM)
by Yaqi Zhao and Hongxia Ye
Remote Sens. 2024, 16(10), 1743; https://doi.org/10.3390/rs16101743 - 14 May 2024
Cited by 3 | Viewed by 2177
Abstract
Crater detection is useful for research into dating planetary surfaces and for geological mapping. The high-resolution imaging camera (HiRIC) carried by the Tianwen-1 orbiter provides digital image model (DIM) datasets with a resolution of 0.7 m/pixel, which are suitable for detecting meter-scale craters. Existing deep-learning-based automatic crater detection algorithms require a large number of crater annotation datasets for training. However, there is currently a lack of datasets of optical images of small-sized craters. In this study, we propose a model based on the Segment Anything Model (SAM) to detect craters in Tianwen-1's landing area and perform statistical analysis. The SAM network was used to obtain a segmentation mask of the craters from the DIM images. Then non-circular filtering was used to filter out irregular craters. Finally, deduplication and removal of false positives were performed to obtain accurate circular craters, whose center positions and diameters were obtained through circle-fitting analysis. We extracted 841,727 craters in total, with diameters ranging from 1.57 m to 7910.47 m. These data are useful for extending Martian crater catalogs and crater datasets. Additionally, the crater size–frequency distribution (CSFD) was analyzed, indicating that the surface ages of the Tianwen-1 landing area are ~3.25 billion years, with subsequent surface resurfacing events occurring ~1.67 billion years ago. Full article
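The non-circular filtering step can be approximated with the standard isoperimetric circularity measure 4πA/P², which equals 1 for a perfect circle and less for any other shape; the threshold below is an illustrative assumption, not the paper's value:

```python
import math

def is_circular(area, perimeter, thresh=0.8):
    """Isoperimetric circularity test: 4*pi*A / P^2 is 1 for a circle.

    Candidates below the threshold are rejected as non-circular."""
    return 4 * math.pi * area / (perimeter ** 2) >= thresh
```

For example, a circle scores 1.0 while a square scores pi/4 ≈ 0.785 and is filtered out at this threshold.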
(This article belongs to the Special Issue Planetary Geologic Mapping and Remote Sensing (Second Edition))

16 pages, 1308 KB  
Article
Classification of Rainfall Intensity and Cloud Type from Dash Cam Images Using Feature Removal by Masking
by Kodai Suemitsu, Satoshi Endo and Shunsuke Sato
Climate 2024, 12(5), 70; https://doi.org/10.3390/cli12050070 - 12 May 2024
Cited by 1 | Viewed by 3561
Abstract
Weather Report is an initiative from Weathernews Inc. to obtain sky images and current weather conditions from the users of its weather app. This approach can provide supplementary weather information to radar observations and can potentially improve the accuracy of forecasts. However, since the time and location of the contributed images are limited, gathering data from different sources is also necessary. This study proposes a system that automatically submits weather reports using a dash cam with communication capabilities and image recognition technology. This system aims to provide detailed weather information by classifying rainfall intensities and cloud formations from images captured via dash cams. In models for fine-grained image classification tasks, there are very subtle differences between some classes and only a few samples per class. Therefore, such models tend to include irrelevant details, such as the background, during training, leading to bias. One solution is to remove useless features from images by masking them using semantic segmentation, then train on each masked dataset using EfficientNet and evaluate the resulting accuracy. In the classification of rainfall intensity, the model utilizing the features of the entire image achieved up to 92.61% accuracy, which is 2.84% higher than the model trained specifically on road features. This outcome suggests the significance of considering information from the whole image to determine rainfall intensity. Furthermore, analysis using the Grad-CAM visualization technique revealed that classifiers trained on masked dash cam images particularly focused on car headlights when classifying the rainfall intensity. For cloud type classification, the model focusing solely on the sky region attained an accuracy of 68.61%, which is 3.16% higher than that of the model trained on the entire image. 
This indicates that concentrating on the features of clouds and the sky enables more accurate classification and that eliminating irrelevant areas reduces misclassifications. Full article
(This article belongs to the Special Issue Extreme Weather Detection, Attribution and Adaptation Design)

21 pages, 20756 KB  
Article
A Novel Method for Cloud and Cloud Shadow Detection Based on the Maximum and Minimum Values of Sentinel-2 Time Series Images
by Kewen Liang, Gang Yang, Yangyan Zuo, Jiahui Chen, Weiwei Sun, Xiangchao Meng and Binjie Chen
Remote Sens. 2024, 16(8), 1392; https://doi.org/10.3390/rs16081392 - 15 Apr 2024
Cited by 9 | Viewed by 4972
Abstract
Automatic and accurate detection of clouds and cloud shadows is a critical aspect of optical remote sensing image preprocessing. This paper provides a time series maximum and minimum mask method (TSMM) for cloud and cloud shadow detection. Firstly, the Cloud Score+S2_HARMONIZED (CS+S2) is employed as a preliminary mask for clouds and cloud shadows. Secondly, we calculate the ratio of the maximum and sub-maximum values of the blue band in the time series, as well as the ratio of the minimum and sub-minimum values of the near-infrared band in the time series, to eliminate noise from the time series data. Finally, the maximum value of the clear blue band and the minimum value of the near-infrared band after noise removal are employed for cloud and cloud shadow detection, respectively. A national and a global dataset were used to validate the TSMM, and it was quantitatively compared against five other advanced methods or products. When clouds and cloud shadows are detected simultaneously, in the S2ccs dataset, the overall accuracy (OA) reaches 0.93 and the F1 score reaches 0.85. Compared with the most advanced CS+S2, there are increases of 3% and 9%, respectively. In the CloudSEN12 dataset, compared with CS+S2, the producer’s accuracy (PA) and F1 score show increases of 10% and 4%, respectively. Additionally, when applied to Landsat-8 images, TSMM outperforms Fmask, demonstrating its strong generalization capability. Full article
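The noise-filtering idea in TSMM, discarding a time-series extremum when it dwarfs the runner-up, can be sketched for the maximum of the blue band; the ratio threshold and function name are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def drop_outlier_max(series, ratio_thresh=1.5):
    """Return the time-series maximum, falling back to the sub-maximum
    when the max/sub-max ratio suggests the maximum is noise.

    Assumes the series has at least two observations."""
    s = np.sort(np.asarray(series, dtype=float))
    mx, sub = s[-1], s[-2]
    return sub if sub > 0 and mx / sub > ratio_thresh else mx
```

The analogous check on the near-infrared minimum would compare the minimum against the sub-minimum.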
(This article belongs to the Special Issue Satellite-Based Cloud Climatologies)

22 pages, 32270 KB  
Article
A Cloud Coverage Image Reconstruction Approach for Remote Sensing of Temperature and Vegetation in Amazon Rainforest
by Emili Bezerra, Salomão Mafalda, Ana Beatriz Alvarez, Diego Armando Uman-Flores, William Isaac Perez-Torres and Facundo Palomino-Quispe
Appl. Sci. 2023, 13(23), 12900; https://doi.org/10.3390/app132312900 - 1 Dec 2023
Cited by 6 | Viewed by 2868
Abstract
Remote sensing involves actions to obtain information about an area located on Earth. In the Amazon region, the presence of clouds is a common occurrence, and the visualization of important terrestrial information in the image, like vegetation and temperature, can be difficult. In order to estimate land surface temperature (LST) and the normalized difference vegetation index (NDVI) from satellite images with cloud coverage, an inpainting approach is applied to remove clouds and restore the image in the removed regions. This paper proposes the use of the neural network LaMa (large mask inpainting) and the scalable model named Big LaMa for the automatic reconstruction process in satellite images. Experiments are conducted on Landsat-8 satellite images of the Amazon rainforest in the state of Acre, Brazil. To evaluate the architecture's accuracy, the RMSE (root mean squared error), SSIM (structural similarity index) and PSNR (peak signal-to-noise ratio) metrics were used. The LST and NDVI of the reconstructed image were calculated and compared qualitatively and quantitatively, using scatter plots and the chosen metrics, respectively. The experimental results show that the Big LaMa architecture performs more effectively and robustly in restoring images in terms of visual quality, while the LaMa network shows a slight edge on the measured metrics for medium-sized masked areas. When comparing the NDVI and LST of the reconstructed images with real cloud coverage, Big LaMa produced strong visual results. Full article
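NDVI, one of the two indices computed from the reconstructed images, follows the standard definition (NIR - Red)/(NIR + Red); a small sketch (the epsilon guard against division by zero is an implementation convenience, not part of the definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```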
(This article belongs to the Special Issue Advanced Remote Sensing Imaging for Environmental Sciences)

16 pages, 742 KB  
Article
REKP: Refined External Knowledge into Prompt-Tuning for Few-Shot Text Classification
by Yuzhuo Dang, Weijie Chen, Xin Zhang and Honghui Chen
Mathematics 2023, 11(23), 4780; https://doi.org/10.3390/math11234780 - 27 Nov 2023
Cited by 1 | Viewed by 1757
Abstract
Text classification is a machine learning technique employed to assign a given text to predefined categories, facilitating the automatic analysis and processing of textual data. However, an important problem is that the number of new text categories is growing faster than that of human annotation data, leaving many new text categories with little annotated data. As a result, conventional deep neural networks tend to overfit, which hurts real-world applications. As a solution to this problem, academics recommend addressing data scarcity through few-shot learning. One of the efficient methods is prompt-tuning, which transforms the input text into a mask prediction problem featuring [MASK]. By utilizing verbalizers, the model maps output words to labels, enabling accurate prediction. Nevertheless, previous prompt-based adaptation approaches often relied on manually produced verbalizers or a single label to represent the entire label vocabulary, which makes the mapping granularity coarse, so words are not accurately mapped to their labels. To address these issues, we propose to enhance the verbalizer and construct the refined external knowledge into a prompt-tuning (REKP) model. We employ external knowledge bases to increase the mapping space of tagged terms and design three refinement methods to remove noise data. We conduct comprehensive experiments on four benchmark datasets, namely AG's News, Yahoo, IMDB, and Amazon. The results demonstrate that REKP can outperform the state-of-the-art baselines in terms of Micro-F1 on knowledge-enhanced text classification. In addition, we conduct an ablation study to ascertain the functionality of each module in our model, revealing that the refinement module significantly contributes to enhancing classification accuracy. Full article
12 pages, 485 KB  
Article
Habitual Mask Wearing as Part of COVID-19 Control in Japan: An Assessment Using the Self-Report Habit Index
by Tianwen Li, Marie Fujimoto, Katsuma Hayashi, Asami Anzai and Hiroshi Nishiura
Behav. Sci. 2023, 13(11), 951; https://doi.org/10.3390/bs13110951 - 19 Nov 2023
Cited by 7 | Viewed by 4684
Abstract
Although the Japanese government removed mask-wearing requirements in 2023, relatively high rates of mask wearing have continued in Japan. We aimed to assess the psychological reasons for, and the strength of, habitual mask wearing in Japan. An Internet-based cross-sectional survey was conducted with non-random participant recruitment. We explored the frequency of mask usage and investigated the psychological reasons for wearing masks. A regression analysis examined the association between psychological reasons and the frequency of mask wearing. Habitual mask use was assessed in the participant’s most frequently visited indoor space and on public transport using the self-report habit index. Principal component analysis with varimax rotation revealed distinct habitual characteristics. Among the 2640 participants surveyed from 6 to 9 February 2023, only 4.9% reported not wearing masks at all. Conformity to social norms was the most important reason for wearing masks. Participants exhibited a slightly higher degree of habituation towards mask wearing on public transport than in indoor spaces. The mask-wearing rate was higher in females than in males, and no significant difference was identified by age group. Daily mask wearing in indoor spaces was characterized by two traits (automaticity and behavioral frequency). A high mask-wearing frequency was maintained in Japan during the social reopening transition period. Mask wearing has become part of daily habit, especially on public transport, largely driven by automatic and frequent practice. Full article
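The measurement approach above — scoring the self-report habit index per participant and then examining principal components of the items — could look roughly like the sketch below. The synthetic responses and item count are illustrative only, and the varimax rotation used in the study is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 200 participants x 12 SRHI-style items on a 1-5 scale
# (illustrative data; the actual survey responses are not reproduced here).
n, items = 200, 12
responses = rng.integers(1, 6, size=(n, items)).astype(float)

# Per-participant habit strength: the mean across the items
srhi_score = responses.mean(axis=1)

# Principal components of the item correlation matrix
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
corr = z.T @ z / n
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
order = np.argsort(eigvals)[::-1]                 # sort descending
explained = eigvals[order] / eigvals.sum()        # variance explained per PC
```

On real data, the loadings (`eigvecs[:, order]`) would then be rotated (e.g. varimax) before interpreting components such as automaticity and behavioral frequency.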
(This article belongs to the Special Issue Health Psychology and Behaviors during COVID-19)
19 pages, 8540 KB  
Article
Bone Metastases Lesion Segmentation on Breast Cancer Bone Scan Images with Negative Sample Training
by Yi-You Chen, Po-Nien Yu, Yung-Chi Lai, Te-Chun Hsieh and Da-Chuan Cheng
Diagnostics 2023, 13(19), 3042; https://doi.org/10.3390/diagnostics13193042 - 25 Sep 2023
Cited by 5 | Viewed by 5146
Abstract
The use of deep learning methods for the automatic detection and quantification of bone metastases in bone scan images holds significant clinical value. A fast and accurate automated system for segmenting bone metastatic lesions can assist clinical physicians in diagnosis. In this study, a small internal dataset comprising 100 breast cancer patients (90 cases of bone metastasis and 10 cases of non-metastasis) and 100 prostate cancer patients (50 cases of bone metastasis and 50 cases of non-metastasis) was used for model training. Initially, all image labels were binary. We used the Otsu thresholding method or negative mining to generate a non-metastasis mask, thereby transforming the image labels into three classes. We adopted the Double U-Net as the baseline model and changed its output activation function to SoftMax to accommodate multi-class segmentation. Several methods were used to enhance model performance: background pre-processing to remove background information, adding negative samples to improve model precision, and transfer learning to leverage features shared between the two datasets. Performance was investigated via 10-fold cross-validation and computed at the pixel level. The best model achieved a precision of 69.96%, a sensitivity of 63.55%, and an F1-score of 66.60%. Compared to the baseline model, this represents an 8.40% improvement in precision, a 0.56% improvement in sensitivity, and a 4.33% improvement in the F1-score. The developed system has the potential to provide pre-diagnostic reports to support physicians’ final decisions and, in combination with bone skeleton segmentation, the calculation of the bone scan index (BSI). Full article
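The label transformation described above — using Otsu thresholding to derive a non-metastasis (body) mask and turn binary lesion labels into three classes — might be sketched as follows. This is an illustrative reconstruction, not the authors' code; the class encoding (0 = background, 1 = non-metastatic body, 2 = lesion) is an assumption:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit image: pick the threshold that maximizes
    the between-class variance of the intensity histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def three_class_labels(img, metastasis_mask):
    """Turn a binary lesion label into three classes:
    0 = background, 1 = non-metastatic body, 2 = metastatic lesion."""
    t = otsu_threshold(img)
    body = img > t                          # foreground (patient body) mask
    labels = np.zeros(img.shape, dtype=np.uint8)
    labels[body & ~metastasis_mask] = 1     # body pixels outside lesions
    labels[metastasis_mask] = 2             # annotated lesion pixels
    return labels
```

A three-class target of this form is what lets the SoftMax output head distinguish "definitely not metastasis but still bone/body" from plain background, which is the point of the negative-sample training.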
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging)
26 pages, 31605 KB  
Article
An Automatic Method for Rice Mapping Based on Phenological Features with Sentinel-1 Time-Series Images
by Guixiang Tian, Heping Li, Qi Jiang, Baojun Qiao, Ning Li, Zhengwei Guo, Jianhui Zhao and Huijin Yang
Remote Sens. 2023, 15(11), 2785; https://doi.org/10.3390/rs15112785 - 26 May 2023
Cited by 11 | Viewed by 3509
Abstract
Rice is one of the most important staple foods in the world, feeding more than 50% of the global population. However, rice is also a significant emitter of greenhouse gases and plays a role in global climate change. As a result, obtaining accurate rice maps quickly is crucial for ensuring global food security and mitigating global warming. In this study, we proposed an automated rice mapping method, called automated rice mapping using V-shaped phenological features of rice (Auto-RMVPF), based on time-series Sentinel-1A images. The method is composed of four main steps. First, a dynamic threshold method automatically extracts abundant rice samples from flooding signals. Second, a second-order difference method automatically extracts the phenological period of rice based on the scattering features of the rice samples. Then, the key “V” feature of the VH backscatter time series, which dips during flooding and rises before and after rice transplanting, is used for rice mapping. Finally, a farmland mask is extracted to avoid interference from non-farmland features, and a median filter is applied to remove noise from the rice map and obtain the final spatial distribution of rice. The results show that the Auto-RMVPF method not only automatically obtains abundant rice samples but also extracts an accurate phenological period of rice. The accuracy of rice mapping is also satisfactory, with an overall accuracy of more than 95% and an F1 score of over 0.91. The overall accuracy of the Auto-RMVPF method is improved by 2.8–12.2% compared with a support vector machine (SVM) with an overall accuracy of 89.9% (25 training samples) and 92.2% (124 training samples), random forest (RF) with an overall accuracy of 82.8% (25 training samples) and 88.3% (124 training samples), and automated rice mapping using synthetic aperture radar flooding signals (ARM-SARFS) with an overall accuracy of 89.9%. Altogether, these experimental results suggest that the Auto-RMVPF method has broad prospects for automatic rice mapping, especially in mountainous regions where ground samples are often not easily accessible. Full article
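The core “V” feature — a flooding-induced dip in the VH backscatter time series around transplanting — could be detected roughly as sketched below. This is a simplified illustration of the idea, not the Auto-RMVPF implementation; the dB depth threshold and the toy series are invented for the example:

```python
import numpy as np

def find_v_feature(vh_db, depth_db=3.0):
    """Locate a 'V' in a VH backscatter time series (in dB): an interior
    local minimum (flooding dip at transplanting) lying at least `depth_db`
    below the maximum backscatter on both sides. Returns the index of the
    dip, or None if no such feature exists."""
    vh_db = np.asarray(vh_db, dtype=float)
    for i in range(1, len(vh_db) - 1):
        is_local_min = vh_db[i] < vh_db[i - 1] and vh_db[i] < vh_db[i + 1]
        deep_enough = (vh_db[:i].max() - vh_db[i] >= depth_db and
                       vh_db[i + 1:].max() - vh_db[i] >= depth_db)
        if is_local_min and deep_enough:
            return i
    return None

# Toy series in dB: backscatter drops when the paddy is flooded,
# then rises again as the rice canopy develops after transplanting.
series = [-14, -15, -20, -19, -14, -13, -13]
dip = find_v_feature(series)   # index of the flooding minimum
```

Pixels whose time series contain such a dip within the extracted phenological window would be flagged as rice; masking to farmland and median filtering would then clean the resulting map.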
(This article belongs to the Section Biogeosciences Remote Sensing)