Search Results (546)

Search Parameters:
Keywords = smartphone-based imaging

16 pages, 6143 KB  
Article
Precision Livestock Farming: YOLOv12-Based Automated Detection of Keel Bone Lesions in Laying Hens
by Tommaso Bergamasco, Aurora Ambrosi, Vittoria Tregnaghi, Rachele Urbani, Giacomo Nalesso, Francesca Menegon, Angela Trocino, Mattia Pravato, Francesco Bordignon, Stefania Sparesato, Grazia Manca and Guido Di Martino
Poultry 2025, 4(4), 43; https://doi.org/10.3390/poultry4040043 - 24 Sep 2025
Viewed by 69
Abstract
Keel bone lesions (KBLs) represent a relevant welfare concern in laying hens, arising from complex interactions among genetics, housing systems, and management practices. This study presents the development of an image analysis system for the automated detection and classification of KBLs in slaughterhouse videos, enabling scalable and retrospective welfare assessment. In addition to lesion classification, the system can track and count individual carcasses, providing estimates of the total number of specimens with and without significant lesions. Videos of brown laying hens from a commercial slaughterhouse in northeastern Italy were recorded on the processing line using a smartphone. Six hundred frames were extracted and annotated by three independent observers using a three-scale scoring system. A dataset was constructed by combining the original frames with crops centered on the keel area. To address class imbalance, samples of class 1 (damaged keel bones) were augmented by a factor of nine, compared to a factor of three for class 0 (no or mild lesion). A YOLO-based model was trained for both detection and classification tasks. The model achieved an F1 score of 0.85 and a mAP@0.5 of 0.892. A BoT-SORT tracker was evaluated against human annotations on a 5 min video, achieving an F1 score of 0.882 for the classification task. Potential improvements include increasing the number and variability of annotated images, refining annotation protocols, and enhancing model performance under varying slaughterhouse lighting and positioning conditions. The model could be applied in routine slaughter inspections to support welfare assessment in large populations of animals. Full article
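The F1 score of 0.85 reported above combines the detector's precision and recall; a minimal sketch of that calculation (the detection counts below are hypothetical, chosen only to illustrate the formula — they are not the paper's data):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for illustration only
p, r, f1 = precision_recall_f1(tp=85, fp=15, fn=15)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.85 0.85 0.85
```

mAP@0.5 extends this idea by averaging precision over recall levels at an IoU threshold of 0.5, per class.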

21 pages, 10100 KB  
Article
Real-Time Identification of Mixed and Partly Covered Foreign Currency Using YOLOv11 Object Detection
by Nanda Fanzury and Mintae Hwang
AI 2025, 6(10), 241; https://doi.org/10.3390/ai6100241 - 24 Sep 2025
Viewed by 118
Abstract
Background: This study presents a real-time mobile system for identifying mixed and partly covered foreign coins and banknotes using the You Only Look Once version 11 (YOLOv11) deep learning framework. The proposed system addresses practical challenges faced by travelers and visually impaired individuals when handling multiple currencies. Methods: The system introduces three novel aspects: (i) simultaneous recognition of both coins and banknotes from multiple currencies within a single image, even when items are overlapping or occluded; (ii) a hybrid inference strategy that integrates an embedded TensorFlow Lite (TFLite) model for on-device detection with an optional server-assisted mode for higher accuracy; and (iii) an integrated currency conversion module that provides real-time value translation based on current exchange rates. A purpose-built dataset containing 46 denomination classes across four major currencies, namely the US Dollar (USD), Euro (EUR), Chinese Yuan (CNY), and Korean Won (KRW), was used for training, including challenging cases of overlap, folding, and partial coverage. Results: Experimental evaluation demonstrated robust performance under diverse real-world conditions. The system achieved high detection accuracy and low latency, confirming its suitability for practical deployment on consumer-grade smartphones. Conclusions: These findings confirm that the proposed approach achieves an effective balance between portability, robustness, and detection accuracy, making it a viable solution for real-time mixed currency recognition in everyday scenarios. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
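The hybrid on-device/server inference strategy the abstract describes can be sketched as a confidence-gated fallback. The `Detection` type, the 0.7 threshold, and the fallback rule below are assumptions for illustration, not the paper's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "USD_1_coin" (hypothetical class name)
    confidence: float

def choose_result(on_device: Detection, server_available: bool,
                  query_server=None, threshold: float = 0.7) -> Detection:
    """Hybrid strategy (assumed): trust the on-device TFLite detection when
    it is confident; otherwise fall back to the server model if reachable."""
    if on_device.confidence >= threshold or not server_available or query_server is None:
        return on_device
    return query_server()

# A low-confidence on-device result defers to the (stubbed) server model.
local = Detection("EUR_2_coin", 0.45)
remote = lambda: Detection("EUR_1_coin", 0.93)
print(choose_result(local, True, remote).label)  # EUR_1_coin
```

This keeps the common path fully offline while reserving network round-trips for ambiguous frames.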

18 pages, 2713 KB  
Article
Optimization of Smartphone-Based Strain Measurement Algorithm Utilizing Arc-Support Line Segments
by Qiwen Cui, Changfei Gou, Shengan Lu and Botao Xie
Buildings 2025, 15(18), 3407; https://doi.org/10.3390/buildings15183407 - 20 Sep 2025
Viewed by 217
Abstract
Smartphone-based strain monitoring of structural components is an emerging approach to structural health monitoring. However, the existing techniques suffer from limited accuracy and poor cross-device adaptability. This study aims to optimize the smartphone-based Micro Image Strain Sensing (MISS) method by replacing the traditional Connected Component Labeling (CCL) algorithm with the arc-support line segments (ASLS) algorithm, thereby significantly enhancing the stability and adaptability of circle detection in micro-images captured by diverse smartphones. Additionally, this study evaluates the impact of lighting conditions and lens distortion on the optimized MISS method. The experimental results demonstrate that the ASLS algorithm outperforms CCL in terms of recognition accuracy (maximum error of 0.94%) and cross-device adaptability, exhibiting greater robustness against color temperature and focal length variations. Under fluctuating lighting conditions, the strain measurement noise remains within ±0.5 με, with a maximum error of 7.0 με relative to LVDT measurements, indicating the strong adaptability of the optimized MISS method to external light changes. Barrel distortion in microscopic images induces a maximum pixel error of 5.66%, yet the final optimized MISS method achieves highly accurate strain measurements. The optimized MISS method significantly improves measurement stability and engineering applicability, enabling effective large-scale implementation for strain monitoring of civil infrastructure. Full article
(This article belongs to the Section Building Structures)
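Once the circle centers are located in the micro-image, strain follows from the relative change in their pixel separation. A minimal sketch of that reading of the MISS principle (the gauge geometry and calibration value are illustrative; note that for a fixed camera the µm-per-pixel scale cancels out of the strain ratio):

```python
def strain_microstrain(gauge_len_px: float, delta_px: float,
                       um_per_px: float) -> float:
    """Strain (microstrain) from the change in pixel distance between two
    reference circles; um_per_px is the calibrated image scale. Because
    strain is a ratio, the scale cancels for a single fixed camera."""
    return (delta_px * um_per_px) / (gauge_len_px * um_per_px) * 1e6

# 0.05 px elongation over a 1000 px gauge length -> 50 microstrain
print(round(strain_microstrain(1000.0, 0.05, 0.8), 6))  # 50.0
```

The hard part the paper addresses is getting `delta_px` reliably, which is why robust circle detection (ASLS vs. CCL) matters so much.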

24 pages, 11967 KB  
Article
Smartphone-Based Edge Intelligence for Nighttime Visibility Estimation in Smart Cities
by Chengyuan Duan and Shiqi Yao
Electronics 2025, 14(18), 3642; https://doi.org/10.3390/electronics14183642 - 15 Sep 2025
Viewed by 343
Abstract
Impaired visibility, a major global environmental threat, is a result of light scattering by atmospheric particulate matter. While digital photographs are increasingly used for daytime visibility estimation, such methods are largely ineffective at night owing to the different scattering effects. Here, we introduce an image-based algorithm for inferring nighttime visibility from a single photograph by analyzing the forward scattering index and optical thickness retrieved from glow effects around light sources. Using photographs crawled from social media platforms across mainland China, we estimated the nationwide visibility for one year using the proposed algorithm, achieving high goodness-of-fit values (R2 = 0.757; RMSE = 4.318 km), demonstrating robust performance under various nighttime scenarios. The model also captures both chronic and episodic visibility degradation, including localized pollution events. These results highlight the potential of using ubiquitous smartphone photography as a low-cost, scalable, and real-time sensing solution for nighttime atmospheric monitoring in urban areas. Full article
(This article belongs to the Special Issue Advanced Edge Intelligence in Smart Environments)
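The R² and RMSE figures quoted above are standard goodness-of-fit metrics; a minimal sketch of their computation (the visibility values below are toy numbers, not the study's data):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root mean square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Toy station-vs-photo visibility values (km), for illustration only
obs = [5.0, 10.0, 20.0, 30.0]
est = [6.0, 9.0, 22.0, 28.0]
r2, rmse = r2_rmse(obs, est)
print(round(r2, 3), round(rmse, 3))  # 0.973 1.581
```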

13 pages, 2763 KB  
Article
Structural Deflection Measurement with a Single Smartphone Using a New Scale Factor Calibration Method
by Long Tian, Yangxiang Yuan, Liping Yu and Xinyue Zhang
Infrastructures 2025, 10(9), 238; https://doi.org/10.3390/infrastructures10090238 - 10 Sep 2025
Viewed by 301
Abstract
This study proposes a novel structural deflection measurement method using a single smartphone with an innovative scale factor (SF) calibration technique, eliminating reliance on laser rangefinders and industrial cameras. Conventional off-axis digital image correlation (DIC) techniques require laser rangefinders to measure discrete points for SF calculation, suffering from high hardware costs and sunlight-induced ranging failures. The proposed approach replaces physical ranging by deriving SF through geometric relationships of known structural dimensions (e.g., bridge length/width) within the measured plane. A key innovation lies in developing a versatile SF calibration framework adaptable to varying numbers of reference dimensions: a non-optimized calculation integrates smartphone gyroscope-measured 3D angles when only one dimension is available; a local optimization model with angular parameters enhances accuracy for 2–3 known dimensions; and a global optimization model employing spatial constraints achieves precise SF resolution with ≥4 reference dimensions. Indoor experiments demonstrated sub-0.05 m ranging accuracy and deflection errors below 0.30 mm. Field validations on Beijing Subway Line 13's bridge successfully captured dynamic load-induced deformations, confirming outdoor applicability. This smartphone-based method reduces costs compared to traditional setups while overcoming sunlight interference, establishing a hardware-adaptive solution for vision-based structural health monitoring. Full article
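The core idea — deriving the scale factor from a known structural dimension instead of a rangefinder — reduces, in its simplest single-dimension form, to a ratio. The sketch below omits the gyroscope angle correction and the optimization models the abstract describes; the dimensions used are invented for illustration:

```python
def scale_factor(known_dim_m: float, measured_px: float) -> float:
    """Scale factor (m/pixel) from one known structural dimension lying
    in the measured plane (simplified: camera axis assumed perpendicular)."""
    return known_dim_m / measured_px

def deflection_mm(displacement_px: float, sf_m_per_px: float) -> float:
    """Convert a tracked pixel displacement to a physical deflection."""
    return displacement_px * sf_m_per_px * 1000.0

sf = scale_factor(12.0, 3000.0)             # a 12 m bridge width spans 3000 px
print(round(deflection_mm(0.06, sf), 4))    # 0.24 (mm)
```

With multiple known dimensions, the paper instead solves for SF and the view angles jointly, which is what makes the method robust off-axis.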

15 pages, 3317 KB  
Article
Estimation of Growth Parameters of Eustoma grandiflorum Using Smartphone 3D Scanner
by Ryusei Yanagita, Hiroki Naito, Yoshimichi Yamashita and Fumiki Hosoi
Eng 2025, 6(9), 232; https://doi.org/10.3390/eng6090232 - 5 Sep 2025
Viewed by 1738
Abstract
Since the Great East Japan Earthquake, floriculture has expanded in Namie Town, Fukushima Prefecture, as part of agricultural recovery. Growth surveys are essential for floriculture production, cultivation management, and trials as they help assess plant growth. However, these surveys are labor-intensive, and the standards used can vary owing to subjective judgments and individual differences. To address this issue, image-processing technologies are expected to enable more consistent and objective evaluations. In this study, we explored image processing in growth surveys by estimating plant growth parameters from three-dimensional (3D) point clouds acquired using a smartphone-based 3D scanner. Focusing on lisianthus (Eustoma grandiflorum), we estimated the plant height and the number of nodes above the bolting. The results showed that plant height could be estimated with high accuracy, with a root mean square error (RMSE) of 1.2 cm. By contrast, the node number estimation showed a mean error exceeding one node. This error was attributed to the challenges in handling variations in point cloud density, which stem from the 3D point cloud generation method and leaf occlusion caused by dense foliage. Future work should focus on developing analysis methods that are robust to point-cloud density and capable of handling complex vegetative structures. Full article
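Estimating plant height from a 3D point cloud can, at its simplest, be read off as the vertical extent of the cloud. This sketch is a deliberately simplified proxy (the study's pipeline also handles ground detection and the harder node-counting task, which are omitted here), and the points are made up:

```python
def plant_height_cm(points):
    """Plant height as the vertical (z) extent of a point cloud, in cm.
    Assumes z is metres above a pre-removed ground plane."""
    zs = [z for (_x, _y, z) in points]
    return (max(zs) - min(zs)) * 100.0

# Tiny synthetic cloud: three (x, y, z) points in metres
cloud = [(0.0, 0.0, 0.02), (0.1, 0.0, 0.31), (0.05, 0.1, 0.77)]
print(round(plant_height_cm(cloud), 1))  # 75.0
```

The reported 1.2 cm RMSE suggests this extent-style measurement is much less sensitive to point-cloud density than node counting, which requires resolving individual stem features.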

14 pages, 4225 KB  
Article
Portable Bacterial Cellulose-Based Fluorescent Sensor for Rapid and Sensitive Detection of Copper in Food and Environmental Samples
by Hongyuan Zhang, Qian Zhang, Xiaona Ji, Bing Han, Jieqiong Wang and Ce Han
Molecules 2025, 30(17), 3633; https://doi.org/10.3390/molecules30173633 - 5 Sep 2025
Viewed by 965
Abstract
Copper ions (Cu2+), indispensable in physiological processes yet toxic at elevated concentrations, require sensitive on-site monitoring. Here, a portable fluorescent sensing film (Y-CDs@BCM) was fabricated by anchoring yellow-emitting carbon dots (Y-CDs) into bacterial cellulose films, which enables rapid and sensitive detection of Cu2+ in complex real-world samples. The yellow fluorescent carbon dots (Y-CDs) were synthesized from o-phenylenediamine and 1-octyl-3-methylimidazolium tetrafluoroborate precursors, exhibiting excellent fluorescence stability. The fluorescence of Y-CDs was selectively quenched by Cu2+ via the inner filter effect (IFE), allowing quantitative analysis with superior sensitivity compared to existing methods. By adding bacterial cellulose (BC) as a solid support, aggregation-induced fluorescence quenching was effectively reduced, and sensor robustness and portability were improved. Through smartphone-based colorimetric analysis, the Y-CDs@BCM sensor enabled rapid, visual interpretation of Cu2+ detection (within 1 min). Furthermore, cell viability and in vivo assays confirmed the biocompatibility of Y-CDs, indicating their suitability for biological imaging. This work presents an environmentally friendly, reliable, and practical method for on-site Cu2+ monitoring, emphasizing its broad application potential in food safety control and environmental analysis. Full article
(This article belongs to the Special Issue Applications of Fluorescent Sensors in Food and Environment)
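Quenching-based quantification like this typically rests on a linear calibration plus the usual 3σ/slope detection-limit estimate. The sketch below uses made-up calibration points (exactly linear, for clarity) and an assumed blank standard deviation — it illustrates the method, not the paper's actual calibration:

```python
def fit_line(x, y):
    """Least-squares line for a quenching calibration
    (signal drop vs. Cu2+ concentration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def detection_limit(sigma_blank: float, slope: float) -> float:
    """LOD = 3 * sigma_blank / slope (the common IUPAC-style estimate)."""
    return 3 * sigma_blank / slope

conc = [5, 50, 100, 200]           # µg/L (illustrative)
signal = [0.01, 0.10, 0.20, 0.40]  # quenched fraction (made-up, exactly linear)
slope, _ = fit_line(conc, signal)
print(round(detection_limit(0.0025, slope), 2))  # 3.75 (µg/L)
```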

15 pages, 1460 KB  
Article
Smartphone-Based 3D Surface Imaging: A New Frontier in Digital Breast Assessment? Smartphone-Based Breast Assessment
by Nikolas Chrobot, Philipp Unbehaun, Konstantin Frank, Michael Alfertshofer, Wenko Smolka, Tobias Ettl, Alexandra Anker, Lukas Prantl, Vanessa Brébant and Robin Hartmann
J. Clin. Med. 2025, 14(17), 6233; https://doi.org/10.3390/jcm14176233 - 3 Sep 2025
Viewed by 455
Abstract
Background: Three-dimensional surface imaging is widely used in breast surgery. Recently, smartphone-based approaches have emerged. This investigation examines whether smartphone-based three-dimensional surface imaging provides clinically acceptable data in terms of accuracy when compared to a validated reference tool. Methods: Three-dimensional surface models were generated for 40 patients who underwent breast reconstruction surgery using the Vectra H2 (Canfield Scientific, Fairfield, NJ, USA) and the LiDAR sensor of an iPhone 15 Pro in conjunction with photogrammetry. The generated surface models were superimposed using CloudCompare's ICP algorithm, followed by 14 linear surface-to-surface measurements to assess agreement between the three-dimensional surface models. Statistical methods included absolute error calculation, paired t-test, Bland–Altman analysis, and Intra-Class Correlation Coefficients to evaluate intra- and inter-rater reliability. Results: The average landmark-to-landmark deviation between smartphone-based and Vectra-based surface models was M = 2.997 mm (SD = 1.897 mm). No statistically significant differences were found in 13 of the 14 measurements for intra-rater comparison and in 12 of the 14 for inter-rater comparison. Intra-Class Correlation Coefficient values indicated good reliability, ranging from 0.873 to 0.993 (intra-rater) and 0.845 to 0.992 (inter-rater). Bland–Altman analyses confirmed moderate to reliable agreement in 13 of 14 measurements. Conclusions: Smartphone-based three-dimensional surface imaging presents promising possibilities for breast assessment. However, it may not yet be suitable for highly detailed breast assessments requiring accuracy below the 3 mm threshold. Full article
(This article belongs to the Special Issue Current Opinion of Reconstructive and Aesthetic Breast Surgery)
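Bland–Altman analysis, used above to compare the smartphone and Vectra measurements, reports the bias between two methods and the 95% limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch with invented distances:

```python
import statistics as st

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired distances (mm), not the study's data
iphone = [101.2, 98.7, 110.5, 95.0]
vectra = [100.0, 99.5, 108.0, 96.2]
bias, lo, hi = bland_altman(iphone, vectra)
print(round(bias, 3))  # 0.425
```

If nearly all paired differences fall inside [lo, hi] and that interval is clinically narrow, the two methods are considered interchangeable.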

13 pages, 2338 KB  
Article
High-Accuracy Deep Learning-Based Detection and Classification Model in Color-Shift Keying Optical Camera Communication Systems
by Francisca V. Vera Vera, Leonardo Muñoz, Francisco Pérez, Lisandra Bravo Alvarez, Samuel Montejo-Sánchez, Vicente Matus Icaza, Lien Rodríguez-López and Gabriel Saavedra
Sensors 2025, 25(17), 5435; https://doi.org/10.3390/s25175435 - 2 Sep 2025
Viewed by 576
Abstract
The growing number of connected devices has strained traditional radio frequency wireless networks, driving interest in alternative technologies such as optical wireless communications (OWC). Among OWC solutions, optical camera communication (OCC) stands out as a cost-effective option because it leverages existing devices equipped with cameras, such as smartphones and security systems, without requiring specialized hardware. This paper proposes a novel deep learning-based detection and classification model designed to optimize the receiver’s performance in an OCC system utilizing color-shift keying (CSK) modulation. The receiver was experimentally validated using an 8×8 LED matrix transmitter and a CMOS camera receiver, achieving reliable communication over distances ranging from 30 cm to 3 m under varying ambient conditions. The system employed CSK modulation to encode data into eight distinct color-based symbols transmitted at fixed frequencies. Captured image sequences of these transmissions were processed through a YOLOv8-based detection and classification framework, which achieved 98.4% accuracy in symbol recognition. This high precision minimizes transmission errors, validating the robustness of the approach in real-world environments. The results highlight OCC’s potential for low-cost applications, where high-speed data transfer and long-range are unnecessary, such as Internet of Things connectivity and vehicle-to-vehicle communication. Future work will explore adaptive modulation and coding schemes as well as the integration of more advanced deep learning architectures to improve data rates and system scalability. Full article
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)
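With eight distinct CSK colors, each symbol carries log₂ 8 = 3 bits. The color alphabet below is an assumed constellation for illustration — the paper's actual eight colors may differ:

```python
# Assumed 8-color CSK alphabet (illustrative; not necessarily the paper's)
CSK_SYMBOLS = ["red", "green", "blue", "cyan", "magenta", "yellow", "white", "orange"]

def encode(bits: str) -> list[str]:
    """Map a bitstring to CSK color symbols, 3 bits per symbol."""
    assert len(bits) % 3 == 0
    return [CSK_SYMBOLS[int(bits[i:i + 3], 2)] for i in range(0, len(bits), 3)]

def decode(colors: list[str]) -> str:
    """Invert the mapping, as the YOLO-based receiver effectively does
    after classifying each detected LED-matrix frame's color."""
    return "".join(format(CSK_SYMBOLS.index(c), "03b") for c in colors)

msg = "101000011"
assert decode(encode(msg)) == msg
print(encode(msg))  # ['yellow', 'red', 'cyan']
```

The receiver's 98.4% symbol-classification accuracy directly bounds the raw symbol error rate of this mapping.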

33 pages, 4561 KB  
Review
Smartphone-Integrated Electrochemical Devices for Contaminant Monitoring in Agriculture and Food: A Review
by Sumeyra Savas and Seyed Mohammad Taghi Gharibzahedi
Biosensors 2025, 15(9), 574; https://doi.org/10.3390/bios15090574 - 2 Sep 2025
Cited by 1 | Viewed by 1063
Abstract
Recent progress in microfluidic technologies has led to the development of compact and highly efficient electrochemical platforms, including lab-on-a-chip (LoC) systems, that integrate multiple testing functions into a single, portable device. Combined with smartphone-based electrochemical devices, these systems enable rapid and accurate on-site detection of food contaminants, including pesticides, heavy metals, pathogens, and chemical additives at farms, markets, and processing facilities, significantly reducing the need for traditional laboratories. Smartphones improve the performance of these platforms by providing computational power, wireless connectivity, and high-resolution imaging, making them ideal for in-field food safety testing with minimal sample and reagent requirements. At the core of these systems are electrochemical biosensors, which convert specific biochemical reactions into electrical signals, ensuring highly sensitive and selective detection. Advanced nanomaterials and integration with Internet of Things (IoT) technologies have further improved performance, delivering cost-effective, user-friendly food monitoring solutions that meet regulatory safety and quality standards. Analytical techniques such as voltammetry, amperometry, and impedance spectroscopy increase accuracy even in complex food samples. Moreover, low-cost engineering, artificial intelligence (AI), and nanotechnology enhance the sensitivity, affordability, and data analysis capabilities of smartphone-integrated electrochemical devices, facilitating their deployment for on-site monitoring of food and agricultural contaminants. This review explains how these technologies address global food safety challenges through rapid, reliable, and portable detection, supporting food quality, sustainability, and public health. Full article

21 pages, 3192 KB  
Article
Unsupervised Structural Defect Classification via Real-Time and Noise-Robust Method in Smartphone Small Modules
by Sehun Lee, Taehoon Kim, Sookyun Kim, Junho Ahn and Namgi Kim
Electronics 2025, 14(17), 3455; https://doi.org/10.3390/electronics14173455 - 29 Aug 2025
Viewed by 501
Abstract
Demand for OIS (Optical Image Stabilization) actuator modules, developed for shake correction technologies in industries such as smartphones, drones, IoT, and AR/VR, is increasing. To enable real-time and precise inspection of these modules, an AI algorithm that maximizes defect detection accuracy is required. This study proposes an unsupervised learning-based algorithm that is robust to noise and capable of real-time processing for accurate defect classification of OIS actuators in a smart factory environment. The proposed algorithm performs noise-reduction preprocessing, considering the sensitivity of small components and lighting imbalances, and defines three dynamic Regions of Interest (ROIs) to address positional deviations. A customized AutoEncoder (AE) is trained for each ROI, and defect classification is conducted based on reconstruction errors, followed by a final comprehensive decision. Experimental results show that the algorithm achieves an accuracy of 0.9944 and an F1 score of 0.9971 using only a camera without the need for expensive sensors. Furthermore, it demonstrates an average processing time of 2.79 ms per module, ensuring real-time capability. This study contributes to precise quality inspection in smart factories by proposing a robust and scalable unsupervised inspection algorithm. Full article
(This article belongs to the Special Issue Advances in Intelligent Systems and Networks, 2nd Edition)
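The per-ROI AutoEncoder scheme classifies by reconstruction error and then fuses the three ROI verdicts into a final decision. The sketch below assumes a simple any-ROI-exceeds-threshold fusion rule; the paper's exact comprehensive-decision logic may differ:

```python
def classify_rois(errors: dict[str, float], thresholds: dict[str, float]) -> str:
    """Final decision from per-ROI AutoEncoder reconstruction errors
    (assumed rule: defective if any ROI exceeds its threshold)."""
    return "defect" if any(errors[r] > thresholds[r] for r in errors) else "normal"

# Illustrative per-ROI thresholds, e.g. set from normal-only validation data
thr = {"roi_1": 0.05, "roi_2": 0.04, "roi_3": 0.06}
print(classify_rois({"roi_1": 0.02, "roi_2": 0.03, "roi_3": 0.01}, thr))  # normal
print(classify_rois({"roi_1": 0.02, "roi_2": 0.09, "roi_3": 0.01}, thr))  # defect
```

Because the AE is trained only on normal modules, defects show up as regions the model cannot reconstruct well — no labeled defect samples are needed.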

28 pages, 925 KB  
Article
Metaheuristic-Driven Feature Selection for Human Activity Recognition on KU-HAR Dataset Using XGBoost Classifier
by Proshenjit Sarker, Jun-Jiat Tiang and Abdullah-Al Nahid
Sensors 2025, 25(17), 5303; https://doi.org/10.3390/s25175303 - 26 Aug 2025
Cited by 1 | Viewed by 770
Abstract
Human activity recognition (HAR) is an automated technique for identifying human activities using images and sensor data. Although numerous studies exist, most of the models proposed are highly complex and rely on deep learning. This research utilized two novel frameworks based on the Extreme Gradient Boosting (XGB) classifier, also known as the XGBoost classifier, enhanced with metaheuristic algorithms: Golden Jackal Optimization (GJO) and War Strategy Optimization (WARSO). This study utilized the KU-HAR dataset, which was collected from smartphone accelerometer and gyroscope sensors. We extracted 48 mathematical features to convey the HAR information. GJO-XGB achieved a mean accuracy in 10-fold cross-validation of 93.55% using only 23 out of 48 features. However, WARSO-XGB outperformed GJO-XGB and other traditional classifiers, achieving a mean accuracy, F-score, precision, and recall of 94.04%, 92.88%, 93.47%, and 92.40%, respectively. GJO-XGB showed lower standard deviations on the test set (accuracy: 0.200; F-score: 0.285; precision: 0.388; recall: 0.336) compared to WARSO-XGB, indicating a more stable performance. WARSO-XGB exhibited lower time complexity, with average training and testing times of 30.84 s and 0.51 s, compared to 39.40 s and 0.81 s for GJO-XGB. After performing 10-fold cross-validation using various external random seeds, GJO-XGB and WARSO-XGB achieved accuracies of 93.80% and 94.19%, respectively, with a random seed = 20. SHAP identified range_gyro_x, max_acc_z, mean_gyro_x, and several other features as the most informative for HAR. The SHAP analysis also examined individual predictions, including the misclassifications. Full article
(This article belongs to the Section Sensor Networks)
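Features like `range_gyro_x`, `max_acc_z`, and `mean_gyro_x` named above are simple window statistics computed per sensor axis. A minimal sketch of that extraction step (the window values are invented; the study's full set comprises 48 such features):

```python
def window_features(samples: list[float], name: str) -> dict[str, float]:
    """Three simple statistical features per sensor axis for one window:
    mean, max, and range (max minus min)."""
    return {
        f"mean_{name}": sum(samples) / len(samples),
        f"max_{name}": max(samples),
        f"range_{name}": max(samples) - min(samples),
    }

# One illustrative gyroscope-x window (rad/s)
gyro_x = [0.1, -0.4, 0.9, 0.3]
feats = window_features(gyro_x, "gyro_x")
print(round(feats["range_gyro_x"], 2))  # 1.3
```

Concatenating such dictionaries across axes and statistics yields the flat feature vector that the metaheuristic selector then prunes before XGBoost training.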

13 pages, 1077 KB  
Article
Feasibility and Acceptability of a Deep-Learning-Based Nipple Trauma Assessment System for Postpartum Breastfeeding Support
by Maya Nakamura, Hiroyuki Sugimori and Yasuhiko Ebina
Healthcare 2025, 13(17), 2091; https://doi.org/10.3390/healthcare13172091 - 22 Aug 2025
Viewed by 537
Abstract
Background/Objectives: Nipple trauma is a common challenge during the early postpartum period, often undermining maternal confidence and breastfeeding success. Although deep-learning-based image analysis offers the potential for objective and remote assessments, its feasibility in clinical practice has not been well examined. This study aimed to evaluate the feasibility and acceptability of a deep-learning-based nipple trauma assessment system and explore maternal perceptions of the intervention. Methods: A quasi-experimental study was conducted at a maternity hospital in Japan. Participants were assigned to intervention or control groups based on their delivery month. Mothers in the intervention group used a dedicated offline smartphone to photograph their nipples during hospitalization. Images were analyzed using a pretrained deep-learning model, and individualized feedback was delivered via a secure messaging platform. Self-administered questionnaires were collected at three points: late pregnancy, during hospitalization, and one month postpartum. Maternal experiences and satisfaction with breastfeeding were also assessed. Results: A total of 23 participants (intervention = 8 and control = 15) completed the study. The system functioned without technical errors, and no adverse events were reported. Most participants found the AI results useful, with 75% receiving high-confidence outputs (predicted class probability ≥ 60%). Participants expressed interest in real-time feedback and post-discharge use. Breastfeeding self-efficacy scores (BSES-SF) improved more in the intervention group (+9.8) than in the control group (+7.8). Conclusions: This study confirmed the feasibility and acceptability of a deep-learning-based nipple trauma assessment system during postpartum hospitalization. The system operated safely and was well received by participants. Future developments should prioritize real-time, remote functionality to support diverse maternal needs. Full article
(This article belongs to the Special Issue Women’s Health Care: State of the Art and New Challenges)

19 pages, 1949 KB  
Article
Non-Invasive Dry Eye Disease Detection Using Infrared Thermography Images: A Proof-of-Concept Study
by Laily Azyan Ramlan, Wan Mimi Diyana Wan Zaki, Marizuana Mat Daud and Haliza Abdul Mutalib
Diagnostics 2025, 15(16), 2084; https://doi.org/10.3390/diagnostics15162084 - 20 Aug 2025
Viewed by 564
Abstract
Background/Objectives: Dry Eye Disease (DED) significantly impacts quality of life due to the instability of the tear film and reduced tear production. The limited availability of eye care professionals, combined with traditional diagnostic methods that are invasive, non-portable, and time-consuming, results in delayed detection and hindered treatment. This proof-of-concept study aims to explore the feasibility of using smartphone-based infrared thermography (IRT) as a non-invasive, portable screening method for DED. Methods: This study included infrared thermography (IRT) images of 40 subjects (22 normal and 58 DED eyes). Ocular surface temperature changes at three regions of interest (ROIs: nasal cornea, center cornea, and temporal cornea) were compared with Tear Film Break-up Time (TBUT) and Ocular Surface Disease Index (OSDI) scores. Statistical correlations and independent t-tests were performed, while machine learning (ML) models classified normal vs. DED eyes. Results: In these preliminary results, DED eyes exhibited a significantly faster cooling rate (p < 0.001). TBUT showed a negative correlation with OSDI (r = −0.802, p < 0.001) and positive correlations with cooling rates in the nasal cornea (r = 0.717, p < 0.001), center cornea (r = 0.764, p < 0.001), and temporal cornea (r = 0.669, p < 0.001) regions. Independent t-tests confirmed significant differences between normal and DED eyes across all parameters (p < 0.001). The Quadratic Support Vector Machine (SVM) achieved the highest accuracy among SVM models (90.54%), while the k-Nearest Neighbours (k-NN) model using Euclidean distance (k = 3) outperformed overall with 91.89% accuracy, demonstrating strong potential for DED classification. Conclusions: This study provides initial evidence supporting the use of smartphone-based infrared thermography (IRT) as a screening tool for DED. The promising classification performance highlights the potential of this approach, though further validation on larger and more diverse datasets is necessary to advance toward clinical application. Full article
(This article belongs to the Special Issue Advances in Eye Imaging)
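The cooling-rate parameter used above can be illustrated with a minimal sketch: fit a line to a post-blink ROI temperature series and take the negated slope as the cooling rate. The time base, baseline temperature, and noise level below are synthetic assumptions, not the study's data or acquisition protocol.

```python
# Hedged sketch: estimating an ocular-surface cooling rate from an IRT
# temperature-time series as the slope of a linear fit. All values synthetic.
import numpy as np

t = np.linspace(0.0, 5.0, 26)  # seconds after a blink (assumed sampling)
temp = 34.5 - 0.12 * t + np.random.default_rng(2).normal(0, 0.005, t.size)

slope, _ = np.polyfit(t, temp, 1)
cooling_rate = -slope          # °C per second; larger = faster cooling
```

A faster cooling rate (as reported for DED eyes) would show up here simply as a steeper negative slope of the fitted line.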
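A minimal sketch of the best-performing classifier described in the abstract, a k-NN with k = 3 and Euclidean distance. The feature set (three ROI cooling rates plus TBUT and OSDI) is a plausible assumption, not the paper's documented pipeline, and all data below are synthetic.

```python
# Hypothetical k-NN (k = 3, Euclidean) normal-vs-DED classifier on synthetic
# features; not the study's dataset or exact feature engineering.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 80
y = rng.integers(0, 2, n)                    # 0 = normal, 1 = DED
cooling = rng.normal(0.05, 0.01, (n, 3)) + 0.03 * y[:, None]  # DED cools faster
tbut = rng.normal(12.0, 2.0, n) - 6.0 * y    # DED: shorter tear break-up time
osdi = rng.normal(10.0, 4.0, n) + 25.0 * y   # DED: higher symptom score
X = np.column_stack([cooling, tbut, osdi])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean").fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
```

In practice the features would need scaling (OSDI and TBUT otherwise dominate the Euclidean distance), and accuracy should be estimated with cross-validation rather than a single split.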

15 pages, 2515 KB  
Article
Carbon Dot Integrated Cellulose-Based Green-Fluorescent Aerogel for Detection and Removal of Copper Ions in Water
by Guanyan Fu, Chenzhan Peng, Jiangrong Yu, Jiafeng Cao, Shilin Peng, Tian Zhao and Dong Xu
Gels 2025, 11(8), 655; https://doi.org/10.3390/gels11080655 - 18 Aug 2025
Viewed by 393
Abstract
Industrial pollution caused by Cu(II) ions remains one of the most critical environmental challenges worldwide. A novel green-fluorescent aerogel was developed for the simultaneous sensing and adsorption of Cu(II) by cross-linking carboxymethyl nanocellulose with carbon dots (C dots), using epichlorohydrin as the linker. The C dots were synthesized by heating mixed glucose and aspartate solutions at 150 °C. Under UV illumination, the aerogel exhibited intense, homogeneous green fluorescence from the uniformly dispersed C dots, whose emission is efficiently quenched by Cu(II) ions. Using smartphone-based imaging, the Cu(II) concentration was quantified over the range of 5–200 µg/L, with a detection limit of 3.7 µg/L. The adsorption isotherm of Cu(II) onto the aerogel closely followed the Freundlich model (R² = 0.9992), indicating a hybrid mechanism involving both physical adsorption and chemical complexation. The maximum adsorption capacity reached 149.62 mg/g, surpassing that of many reported adsorbents. X-ray photoelectron spectroscopy and Fourier-transform infrared spectroscopy confirmed that the aerogel binds Cu(II) through chelation and redox reactions mediated by hydroxyl, amino, and carboxyl groups. The straightforward, low-cost fabrication from abundant raw materials, together with the superior Cu(II) removal efficiency, positions this bifunctional fluorescent material as a promising candidate for large-scale environmental remediation. Full article
(This article belongs to the Section Gel Applications)
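One plausible way such smartphone-based quantification can work, shown purely as an illustration and not as the authors' pipeline, is to calibrate the green-channel intensity of the UV-lit aerogel against known Cu(II) concentrations with a Stern-Volmer-type quenching relation, F0/F = 1 + K·[Cu]. All calibration numbers below are invented.

```python
# Illustrative Stern-Volmer calibration: invented intensities vs. Cu(II) levels.
import numpy as np

conc = np.array([0.0, 5.0, 25.0, 50.0, 100.0, 200.0])        # Cu(II), µg/L
green = np.array([200.0, 196.1, 181.8, 166.7, 142.9, 111.1])  # mean green value

f0 = green[0]                                  # unquenched fluorescence
K = np.polyfit(conc, f0 / green - 1.0, 1)[0]   # quenching constant (slope)

def cu_from_intensity(f):
    """Invert F0/F = 1 + K*[Cu] to estimate Cu(II) concentration (µg/L)."""
    return (f0 / f - 1.0) / K

estimate = cu_from_intensity(166.7)            # recovers a calibration point
```

A real pipeline would also need fixed illumination and camera settings, white-balance control, and averaging over an ROI of the aerogel image.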
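The Freundlich fit reported above (R² = 0.9992) is typically obtained from the linearized form log q = log Kf + (1/n)·log Ce. A sketch on synthetic data (Kf = 12 and n = 2.2 assumed; these are not the study's measurements):

```python
# Linearized Freundlich isotherm fit, q = Kf * Ce**(1/n), on synthetic data.
import numpy as np

Ce = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # equilibrium conc. (mg/L)
rng = np.random.default_rng(1)
q = 12.0 * Ce ** (1 / 2.2) * (1 + rng.normal(0, 0.01, Ce.size))  # uptake (mg/g)

slope, intercept = np.polyfit(np.log10(Ce), np.log10(q), 1)
Kf, n = 10 ** intercept, 1 / slope

# Coefficient of determination of the fit in log-log space.
pred = intercept + slope * np.log10(Ce)
r2 = 1 - np.sum((np.log10(q) - pred) ** 2) / np.sum(
    (np.log10(q) - np.log10(q).mean()) ** 2
)
```

A value of 1/n between 0 and 1 (here about 0.45) is the usual indicator of favorable, heterogeneous adsorption under the Freundlich model.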
