Search Results (45)

Search Parameters:
Keywords = single source of truth

21 pages, 11683 KB  
Article
A Generative Adversarial Network for Pixel-Scale Lunar DEM Generation from Single High-Resolution Image and Low-Resolution DEM Based on Terrain Self-Similarity Constraint
by Tianhao Chen, Yexin Wang, Jing Nan, Chenxu Zhao, Biao Wang, Bin Xie, Wai-Chung Liu, Kaichang Di, Bin Liu and Shaohua Chen
Remote Sens. 2025, 17(17), 3097; https://doi.org/10.3390/rs17173097 - 5 Sep 2025
Abstract
Lunar digital elevation models (DEMs) are a fundamental data source for lunar research and exploration. However, high-resolution DEM products for the Moon are available only in some local areas, which makes it difficult to meet the needs of scientific research and missions. To address this, we previously developed a deep learning-based method (LDEMGAN1.0) for single-image lunar DEM reconstruction. To address issues such as loss of detail in LDEMGAN1.0, this study leverages the inherent structural self-similarity of different DEM data from the same lunar terrain and proposes an improved version, named LDEMGAN2.0. During training, the model computes the self-similarity graph (SSG) between the outputs of the LDEMGAN2.0 generator and the ground truth, and incorporates a self-similarity loss (SSL) constraint into the generator loss to guide DEM reconstruction. This improves the network's capacity to capture both local and global terrain structures. Using the LROC NAC DTM product (2 m/pixel) as the ground truth, experiments were conducted in the Apollo 11 landing area. The proposed LDEMGAN2.0 achieved a mean absolute error (MAE) of 1.49 m, a root mean square error (RMSE) of 2.01 m, and a structural similarity index measure (SSIM) of 0.86, improvements of 46.0%, 33.4%, and 11.6%, respectively, over LDEMGAN1.0. Both qualitative and quantitative evaluations demonstrate that LDEMGAN2.0 enhances detail recovery and reduces reconstruction artifacts. Full article
(This article belongs to the Special Issue Planetary Geologic Mapping and Remote Sensing (Second Edition))
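
The self-similarity constraint lends itself to a compact implementation. A minimal PyTorch sketch, assuming non-overlapping patches, cosine similarity for the SSG, and an L1 penalty between graphs (the paper's exact formulation and loss weighting are not reproduced here):

```python
import torch
import torch.nn.functional as F

def self_similarity_graph(dem: torch.Tensor, patch: int = 8) -> torch.Tensor:
    """Pairwise cosine similarity between non-overlapping patches of a (B,1,H,W) DEM."""
    patches = F.unfold(dem, kernel_size=patch, stride=patch)   # (B, patch*patch, N)
    patches = F.normalize(patches, dim=1)                      # unit-norm patch vectors
    return patches.transpose(1, 2) @ patches                   # (B, N, N) similarity graph

def self_similarity_loss(fake_dem: torch.Tensor, true_dem: torch.Tensor) -> torch.Tensor:
    """L1 distance between the SSGs of the generated and ground-truth DEMs."""
    return F.l1_loss(self_similarity_graph(fake_dem), self_similarity_graph(true_dem))

# Sketch of how the SSL term would enter the generator loss (weights are placeholders):
# g_loss = adv_loss + lambda_rec * rec_loss + lambda_ssl * self_similarity_loss(fake, real)
```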

21 pages, 806 KB  
Tutorial
Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial
by Sachin Hiriyanna and Wenbing Zhao
Computers 2025, 14(8), 332; https://doi.org/10.3390/computers14080332 - 16 Aug 2025
Viewed by 1134
Abstract
Large language models (LLMs) now match or exceed human performance on many open-ended language tasks, yet they continue to produce fluent but incorrect statements, which is a failure mode widely referred to as hallucination. In low-stakes settings this may be tolerable; in regulated or safety-critical domains such as financial services, compliance review, and client decision support, it is not. Motivated by these realities, we develop an integrated mitigation framework that layers complementary controls rather than relying on any single technique. The framework combines structured prompt design, retrieval-augmented generation (RAG) with verifiable evidence sources, and targeted fine-tuning aligned with domain truth constraints. Our interest in this problem is practical. Individual mitigation techniques have matured quickly, yet teams deploying LLMs in production routinely report difficulty stitching them together in a coherent, maintainable pipeline. Decisions about when to ground a response in retrieved data, when to escalate uncertainty, how to capture provenance, and how to evaluate fidelity are often made ad hoc. Drawing on experience from financial technology implementations, where even rare hallucinations can carry material cost, regulatory exposure, or loss of customer trust, we aim to provide clearer guidance in the form of an easy-to-follow tutorial. This paper makes four contributions. First, we introduce a three-layer reference architecture that organizes mitigation activities across input governance, evidence-grounded generation, and post-response verification. Second, we describe a lightweight supervisory agent that manages uncertainty signals and triggers escalation (to humans, alternate models, or constrained workflows) when confidence falls below policy thresholds. Third, we analyze common but under-addressed security surfaces relevant to hallucination mitigation, including prompt injection, retrieval poisoning, and policy evasion attacks. Finally, we outline an implementation playbook for production deployment, including evaluation metrics, operational trade-offs, and lessons learned from early financial-services pilots. Full article
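
As one illustration of the supervisory-agent layer described above, a minimal sketch of threshold-based escalation; the signal names, thresholds, and routing targets are hypothetical, not the tutorial's API:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    ANSWER = "answer"              # confident enough to respond directly
    ALTERNATE_MODEL = "alternate"  # retry via another model or constrained workflow
    HUMAN_REVIEW = "human"         # escalate to a human reviewer

@dataclass
class UncertaintySignals:
    retrieval_support: float  # fraction of claims grounded in retrieved evidence
    model_confidence: float   # e.g., a calibrated self-consistency score

def supervise(signals: UncertaintySignals,
              answer_threshold: float = 0.9,
              retry_threshold: float = 0.7) -> Route:
    """Route a response by policy thresholds, mirroring the layered design."""
    score = min(signals.retrieval_support, signals.model_confidence)
    if score >= answer_threshold:
        return Route.ANSWER
    if score >= retry_threshold:
        return Route.ALTERNATE_MODEL
    return Route.HUMAN_REVIEW
```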

26 pages, 7645 KB  
Article
Prediction of Rice Chlorophyll Index (CHI) Using Nighttime Multi-Source Spectral Data
by Cong Liu, Lin Wang, Xuetong Fu, Junzhe Zhang, Ran Wang, Xiaofeng Wang, Nan Chai, Longfeng Guan, Qingshan Chen and Zhongchen Zhang
Agriculture 2025, 15(13), 1425; https://doi.org/10.3390/agriculture15131425 - 1 Jul 2025
Viewed by 547
Abstract
The chlorophyll index (CHI) is a crucial indicator for assessing the photosynthetic capacity and nutritional status of crops. However, traditional methods for measuring CHI, such as chemical extraction and handheld instruments, fall short in meeting the requirements for efficient, non-destructive, and continuous monitoring at the canopy level. This study aimed to explore the feasibility of predicting rice canopy CHI using nighttime multi-source spectral data combined with machine learning models. In this study, ground truth CHI values were obtained using a SPAD-502 chlorophyll meter. Canopy spectral data were acquired under nighttime conditions using a high-throughput phenotyping platform (HTTP) equipped with active light sources in a greenhouse environment. Three types of sensors—multispectral (MS), visible light (RGB), and chlorophyll fluorescence (ChlF)—were employed to collect data across different growth stages of rice, ranging from tillering to maturity. PCA and LASSO regression were applied for dimensionality reduction and feature selection of multi-source spectral variables. Subsequently, CHI prediction models were developed using four machine learning algorithms: support vector regression (SVR), random forest (RF), back-propagation neural network (BPNN), and k-nearest neighbors (KNNs). The predictive performance of individual sensors (MS, RGB, and ChlF) and sensor fusion strategies was evaluated across multiple growth stages. The results demonstrated that sensor fusion models consistently outperformed single-sensor approaches. Notably, during tillering (TI), maturity (MT), and the full growth period (GP), fused models achieved high accuracy (R2 > 0.90, RMSE < 2.0). The fusion strategy also showed substantial advantages over single-sensor models during the jointing–heading (JH) and grain-filling (GF) stages. Among the individual sensor types, MS data achieved relatively high accuracy at certain stages, while models based on RGB and ChlF features exhibited weaker performance and lower prediction stability. Overall, the highest prediction accuracy was achieved during the full growth period (GP) using fused spectral data, with an R2 of 0.96 and an RMSE of 1.99. This study provides a valuable reference for developing CHI prediction models based on nighttime multi-source spectral data. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
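
A sketch of the described pipeline: LASSO-based feature selection over fused multi-sensor features, followed by one of the four compared regressors (SVR), using scikit-learn. The placeholder arrays stand in for the fused MS/RGB/ChlF feature matrix and SPAD-derived CHI values:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))                # 120 plots x 30 fused spectral features (placeholder)
y = 40 + 2 * X[:, 0] + rng.normal(size=120)   # SPAD-style CHI target (placeholder)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),   # LASSO feature selection, as in the study
    SVR(kernel="rbf"),                # one of the four regressors compared
)
print(cross_val_score(model, X, y, scoring="r2", cv=5).mean())
```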

25 pages, 5702 KB  
Article
YOLOv9-GDV: A Power Pylon Detection Model for Remote Sensing Images
by Ke Zhang, Ningxuan Zhang, Chaojun Shi, Qiaochu Lu, Xian Zheng, Yujie Cao, Xiaoyun Zhang and Jiyuan Yang
Remote Sens. 2025, 17(13), 2229; https://doi.org/10.3390/rs17132229 - 29 Jun 2025
Viewed by 563
Abstract
With continuous breakthroughs in the spatial resolution of satellite remote sensing technology, high-resolution remote sensing images have become a frontier data source for intelligent inspection of power infrastructure. To address issues in existing remote sensing image algorithms, such as difficulties in power target feature extraction, low detection accuracy, and false positives/missed detections, this paper proposes YOLOv9-GDV, a detection algorithm for power towers in high-resolution satellite remote sensing images. Firstly, under high-resolution imaging conditions where transmission tower features are prominent, a Global Pyramid Attention (GPA) mechanism is proposed. This mechanism enhances global representation capabilities, enabling the model to better understand object–background relationships and effectively integrate multi-scale spatial information, thereby improving detection accuracy and robustness. Secondly, a Diverse Branch Block (DBB) is embedded in the feature extraction–fusion module, which enriches the feature space by enhancing the representation capability of single-convolution operations, thereby improving feature extraction performance without increasing inference time. Finally, the Variable Minimum Point Distance Intersection over Union (VMPDIoU) loss is proposed to optimize the model's loss function. This method employs variable input parameters to directly calculate key-point distances between predicted and ground-truth boxes, more accurately reflecting positional differences between detection results and reference targets, thus effectively improving the model's mean Average Precision (mAP). On the Satellite Remote Sensing Power Tower Dataset (SRSPTD), YOLOv9-GDV achieves an mAP of 80.2%, a 4.7% improvement over the baseline algorithm. On the multi-scene high-resolution power transmission tower dataset (GFTD), it obtains an mAP of 94.6%, a 2.3% improvement over the original model. The significant mAP improvements on both datasets validate the effectiveness and feasibility of the proposed method. Full article
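
The VMPDIoU loss itself is novel to the paper; a sketch of the underlying point-distance IoU idea (in the style of MPDIoU, with corner-point distances normalized by the image diagonal) looks like the following. The "variable input parameters" of VMPDIoU are not reproduced:

```python
import torch

def mpdiou_style_loss(pred: torch.Tensor, target: torch.Tensor,
                      img_w: float, img_h: float) -> torch.Tensor:
    """IoU loss penalized by corner-point distances; boxes are (x1, y1, x2, y2).

    Follows the general MPDIoU idea; the paper's VMPDIoU variant differs.
    """
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)
    d2 = img_w ** 2 + img_h ** 2                                      # image diagonal squared
    tl = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    br = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    return (1 - (iou - tl / d2 - br / d2)).mean()
```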

22 pages, 92602 KB  
Article
Source-Free Model Transferability Assessment for Smart Surveillance via Randomly Initialized Networks
by Wei-Cheng Wang, Sam Leroux and Pieter Simoens
Sensors 2025, 25(13), 3856; https://doi.org/10.3390/s25133856 - 20 Jun 2025
Viewed by 418
Abstract
Smart surveillance cameras are increasingly employed for automated tasks such as event and anomaly detection within smart city infrastructures. However, the heterogeneity of deployment environments, ranging from densely populated urban intersections to quiet residential neighborhoods, renders the use of a single, universal model suboptimal. To address this, we propose the construction of a model zoo comprising models trained for diverse environmental contexts. We introduce an automated transferability assessment framework that identifies the most suitable model for a new deployment site. This task is particularly challenging in smart surveillance settings, where both source data and labeled target data are typically unavailable. Existing approaches often depend on pretrained embeddings or assumptions about model uncertainty, which may not hold reliably in real-world scenarios. In contrast, our method leverages embeddings generated by randomly initialized neural networks (RINNs) to construct task-agnostic reference embeddings without relying on pretraining. By comparing feature representations of the target data extracted using both pretrained models and RINNs, this method eliminates the need for labeled data. Structural similarity between embeddings is quantified using minibatch-Centered Kernel Alignment (CKA), enabling efficient and scalable model ranking. We evaluate our method on realistic surveillance datasets across multiple downstream tasks, including object tagging, anomaly detection, and event classification. Our embedding-level score achieves high correlations with ground-truth model rankings (relative to fine-tuned baselines), attaining Kendall’s τ values of 0.95, 0.94, and 0.89 on these tasks, respectively. These results demonstrate that our framework consistently selects the most transferable model, even when the specific downstream task or objective is unknown. This confirms the practicality of our approach as a robust, low-cost precursor to model adaptation or retraining. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
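
The core similarity score, linear CKA between two embedding matrices computed on the same target samples, is standard and can be sketched directly; minibatch aggregation and the RINN architecture itself are omitted:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between embeddings X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return float(hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

# Rank zoo models by how closely their target-data embeddings agree with the
# task-agnostic reference embeddings from a randomly initialized network:
# scores = {name: linear_cka(model_embeddings[name], rinn_embeddings) for name in zoo}
```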

22 pages, 5224 KB  
Article
A Common Data Environment Framework Applied to Structural Life Cycle Assessment: Coordinating Multiple Sources of Information
by Lini Xiang, Gang Li and Haijiang Li
Buildings 2025, 15(8), 1315; https://doi.org/10.3390/buildings15081315 - 16 Apr 2025
Viewed by 1074
Abstract
In Building Information Modeling (BIM)-driven collaboration, the workflow for information management utilizes a Common Data Environment (CDE). The core idea of a CDE is to serve as a single source of truth, enabling efficient coordination among diverse stakeholders. Nevertheless, investigations into employing CDEs to manage projects reveal that procuring commercial CDE solutions is too expensive and functionally redundant for small and medium-sized enterprises (SMEs) and small research organizations, and there is a lack of experience in using CDE tools. Consequently, this study aimed to provide a cheap and lightweight alternative. It proposes a three-layered CDE framework: decentralized databases enabling work in distinct software environments; resource description framework (RDF)-based metadata facilitating seamless data communication; and microservices enabling data collection and reorganization via standardized APIs and query languages. We also apply the CDE framework to structural life cycle assessment (LCA). The results show that a lightweight CDE solution is achievable using tools like the bcfOWL ontology, RESTful APIs, and ASP.NET 6 Clean architecture. This paper offers a scalable framework that reduces infrastructure complexity while allowing users the freedom to integrate diverse tools and APIs for customized information management workflows. This paper’s CDE architecture surpasses traditional commercial software in terms of its flexibility and scalability, facilitating broader CDE applications in the construction industry. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
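
A minimal sketch of the middle layer's idea, attaching RDF metadata to a shared artifact with rdflib; the namespace and properties are illustrative stand-ins, not the bcfOWL vocabulary:

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

CDE = Namespace("http://example.org/cde#")   # illustrative namespace, not bcfOWL
g = Graph()

doc = URIRef("http://example.org/cde/models/structural-lca-v3")
g.add((doc, RDF.type, CDE.Document))
g.add((doc, CDE.storedIn, Literal("structural-analysis-db")))  # decentralized source database
g.add((doc, CDE.revision, Literal(3)))
g.add((doc, CDE.authoredBy, URIRef("http://example.org/cde/actors/structural-team")))

# A microservice can then answer "which revision is current?" with a SPARQL query,
# so every stakeholder reads the same answer: a single source of truth.
print(g.serialize(format="turtle"))
```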

24 pages, 8439 KB  
Article
Triple Collocation-Based Uncertainty Analysis and Data Fusion of Multi-Source Evapotranspiration Data Across China
by Dayang Wang, Shaobo Liu and Dagang Wang
Atmosphere 2024, 15(12), 1410; https://doi.org/10.3390/atmos15121410 - 24 Nov 2024
Viewed by 1037
Abstract
Accurate estimation of evapotranspiration (ET) is critical for understanding land–atmosphere interactions. Despite advances in ET measurement, any single ET estimate still suffers from inherent uncertainties. Data fusion provides a viable option for improving ET estimation by leveraging the strengths of individual ET products, especially the triple collocation (TC) method, which has the prominent advantage of not relying on the availability of “ground truth” data. In this work, we propose a framework for uncertainty analysis and data fusion based on the extended TC (ETC) and multiple TC (MTC) variants. Three different sources of ET products, i.e., the Global Land Evaporation Amsterdam Model (GLEAM), the fifth generation of European Reanalysis-Land (ERA5-Land), and the complementary relationship model (CR), were selected as the TC triplet. The analyses were conducted across different climate zones and land cover types in China. Results show that ETC presents outstanding performance, as most areas conform to the zero-error-correlation assumption, while nearly half of the areas violate this assumption when using MTC. In addition, the ETC method yields a lower root mean square error (RMSE) and a higher correlation coefficient (Corr) than MTC over most climate zones and land cover types. Among the ET products, GLEAM performs the best, while CR performs the worst. The merged ET estimates from both the ETC and MTC methods are generally superior to the original triplets at the site scale. The findings indicate that the TC-based method can be a reliable tool for uncertainty analysis and data fusion. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
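
The extended TC estimator at the core of this framework is standard (McColl et al., 2014) and can be sketched in a few lines; the merging weights and the climate-zone stratification used in the paper are omitted:

```python
import numpy as np

def extended_triple_collocation(x, y, z):
    """Extended TC: error standard deviations and correlations with the unknown
    truth for three collocated series, with no ground-truth data required."""
    q = np.cov(np.vstack([x, y, z]))
    err_var = np.array([
        q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2],
        q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2],
        q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1],
    ])
    rho2 = np.array([
        q[0, 1] * q[0, 2] / (q[0, 0] * q[1, 2]),
        q[0, 1] * q[1, 2] / (q[1, 1] * q[0, 2]),
        q[0, 2] * q[1, 2] / (q[2, 2] * q[0, 1]),
    ])
    return np.sqrt(np.clip(err_var, 0, None)), np.sqrt(np.clip(rho2, 0, 1))
```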

24 pages, 12109 KB  
Article
Case Study of an Integrated Design and Technical Concept for a Scalable Hyperloop System
by Domenik Radeck, Florian Janke, Federico Gatta, João Nicolau, Gabriele Semino, Tim Hofmann, Nils König, Oliver Kleikemper, Felix He-Mao Hsu, Sebastian Rink, Felix Achenbach and Agnes Jocher
Appl. Syst. Innov. 2024, 7(6), 113; https://doi.org/10.3390/asi7060113 - 11 Nov 2024
Cited by 1 | Viewed by 3471
Abstract
This paper presents the design process and resulting technical concept for an integrated hyperloop system, aimed at realizing efficient high-speed ground transportation. This study integrates various functions into a coherent and technically feasible solution, with key design decisions that optimize performance and cost-efficiency. An iterative design process with domain-specific experts, regular reviews, and a dataset with a single source of truth were employed to ensure continuous and collective progress. The proposed hyperloop system features a maximum speed of 600 km/h and a capacity of 21 passengers per pod (vehicle). It employs air docks for efficient boarding, electromagnetic suspension (EMS) combined with electrodynamic suspension (EDS) for high-speed lane switching, and short stator motor technology for propulsion. Cooling is managed through water evaporation at an operating pressure of 10 mbar, while a 300 kW inductive power supply (IPS) provides onboard power. The design includes a safety system that avoids emergency exits along the track and utilizes separated safety-critical and high-bandwidth communication. With prefabricated concrete parts used for the tube, construction costs can be reduced and scalability improved. A dimensioned cross-sectional drawing, as well as a preliminary pod mass budget and station layout, are provided, highlighting critical technical systems and their interactions. Calculations of energy consumption per passenger kilometer, accounting for all functions, demonstrate a distinct advantage over existing modes of transportation, achieving greater efficiency even at high speeds and with smaller vehicle sizes. This work demonstrates the potential of a well-integrated hyperloop system to significantly enhance transportation efficiency and sustainability, positioning it as a promising extension to existing modes of travel. The findings offer a solid framework for future hyperloop development, encouraging further research, standardization efforts, and public dissemination for continued advancements. Full article
(This article belongs to the Section Control and Systems Engineering)

21 pages, 9876 KB  
Article
Estimation of Leaf Area Index across Biomes and Growth Stages Combining Multiple Vegetation Indices
by Fangyi Lv, Kaimin Sun, Wenzhuo Li, Shunxia Miao and Xiuqing Hu
Sensors 2024, 24(18), 6106; https://doi.org/10.3390/s24186106 - 21 Sep 2024
Cited by 2 | Viewed by 2307
Abstract
The leaf area index (LAI) is a key indicator of vegetation canopy structure and growth status, crucial for global ecological environment research. The Moderate Resolution Spectral Imager-II (MERSI-II) aboard Fengyun-3D (FY-3D) covers the globe twice daily, providing a reliable data source for large-scale and high-frequency LAI estimation. VI-based LAI estimation is effective, but the impacts of species and growth status on the sensitivity of the VI–LAI relationship are rarely considered, especially for MERSI-II. This study analyzed the VI–LAI relationship for eight biomes in China with contrasting leaf structures and canopy architectures. The LAI was estimated by adaptively combining multiple VIs and validated using MODIS, GLASS, and ground measurements. Results show that (1) species and growth stages significantly affect VI–LAI sensitivity. For example, the EVI is optimal for broadleaf crops in winter, while the RDVI is best for evergreen needleleaf forests in summer. (2) Combining vegetation indices can significantly optimize sensitivity. The accuracy of multi-VI-based LAI retrieval is notably higher than using a single VI for the entire year. (3) MERSI-II shows good spatial–temporal consistency with MODIS and GLASS and is more sensitive to vegetation growth fluctuations. Direct validation with ground-truth data also demonstrates that the uncertainty of the retrievals is acceptable (R2 = 0.808, RMSE = 0.642). Full article
(This article belongs to the Section Remote Sensors)
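
One simple way to realize "adaptively combining multiple VIs" is to select, per biome and growth stage, the VI whose linear VI–LAI regression validates best. A sketch under that assumption (the paper's exact combination rule is not given here):

```python
import numpy as np

def fit_r2(vi: np.ndarray, lai: np.ndarray) -> float:
    """R^2 of a simple linear VI-LAI regression."""
    slope, intercept = np.polyfit(vi, lai, 1)
    resid = lai - (slope * vi + intercept)
    return 1 - resid.var() / lai.var()

def select_best_vi(samples: dict) -> dict:
    """samples[(biome, stage)] = {'EVI': (vi, lai), 'RDVI': (vi, lai), ...};
    returns the best-validating VI name per biome and growth stage."""
    return {
        key: max(vis, key=lambda name: fit_r2(*vis[name]))
        for key, vis in samples.items()
    }
```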

20 pages, 2006 KB  
Article
Multi-Source Information Graph Embedding with Ensemble Learning for Link Prediction
by Chunning Hou, Xinzhi Wang, Xiangfeng Luo and Shaorong Xie
Electronics 2024, 13(14), 2762; https://doi.org/10.3390/electronics13142762 - 13 Jul 2024
Viewed by 1442
Abstract
Link prediction is a key technique for connecting entities and relationships in the graph reasoning field. It leverages known information about the graph structure to predict missing factual information. Previous studies have either focused on the semantic representation of a single triplet or on the graph structure built on triples. The former ignores the association between different triples, and the latter ignores the true meaning of the node itself. Furthermore, common graph-structured datasets inherently face challenges such as missing information and incompleteness. In light of this challenge, we present a novel model called Multi-source Information Graph Embedding with Ensemble Learning for Link Prediction (EMGE), which can effectively improve the reasoning of link prediction. Ensemble learning is systematically applied throughout the model training process. At the data level, this approach enhances entity embeddings by integrating structured graph information and unstructured textual data as multi-source inputs. The fusion of these inputs is addressed by introducing an attention mechanism. During the training phase, the principle of ensemble learning is employed to extract semantic features from multiple neural network models, facilitating the interaction of enriched information. To ensure effective model learning, a novel loss function based on contrastive learning is devised, effectively minimizing the discrepancy between predicted values and the ground truth. Moreover, to enhance the semantic representation of graph nodes in link prediction, two rules are introduced during the aggregation of graph structure information. These rules incorporate the concept of spreading activation, enabling a more comprehensive understanding of the relationships between nodes and edges in the graph. During the testing phase, the EMGE model was validated on three datasets, including WN18RR, FB15k-237, and a private Chinese financial dataset. The experimental results demonstrate a reduction in the mean rank (MR) by 0.2 times, an improvement in the mean reciprocal rank (MRR) by 5.9%, and an increase in Hit@1 by 12.9% compared to the baseline model. Full article
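
A sketch of a margin-based contrastive loss of the kind described, pulling a head–relation embedding toward the true tail and away from corrupted tails; the scoring function and margin are assumptions, not EMGE's exact loss:

```python
import torch
import torch.nn.functional as F

def contrastive_link_loss(h_rel: torch.Tensor,
                          pos_tail: torch.Tensor,
                          neg_tails: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """Margin-based contrastive loss for link prediction.

    h_rel:     (B, d) head entity combined with its relation
    pos_tail:  (B, d) embedding of the true tail entity
    neg_tails: (B, K, d) embeddings of K corrupted tails per triple
    """
    pos = F.cosine_similarity(h_rel, pos_tail, dim=-1)                 # (B,)
    neg = F.cosine_similarity(h_rel.unsqueeze(1), neg_tails, dim=-1)   # (B, K)
    return F.relu(margin - pos.unsqueeze(1) + neg).mean()
```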

22 pages, 5458 KB  
Article
Hammerstein–Wiener Motion Artifact Correction for Functional Near-Infrared Spectroscopy: A Novel Inertial Measurement Unit-Based Technique
by Hayder R. Al-Omairi, Arkan AL-Zubaidi, Sebastian Fudickar, Andreas Hein and Jochem W. Rieger
Sensors 2024, 24(10), 3173; https://doi.org/10.3390/s24103173 - 16 May 2024
Cited by 1 | Viewed by 2562
Abstract
Participant movement is a major source of artifacts in functional near-infrared spectroscopy (fNIRS) experiments. Mitigating the impact of motion artifacts (MAs) is crucial to estimate brain activity robustly. Here, we suggest and evaluate a novel application of the nonlinear Hammerstein–Wiener model to estimate and mitigate MAs in fNIRS signals from direct-movement recordings through IMU sensors mounted on the participant’s head (head-IMU) and the fNIRS probe (probe-IMU). To this end, we analyzed the hemodynamic responses of single-channel oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) signals from 17 participants who performed a hand tapping task with different levels of concurrent head movement. Additionally, the tapping task was performed without head movements to estimate the ground-truth brain activation. We compared the performance of our novel approach with the probe-IMU and head-IMU to eight established methods (PCA, tPCA, spline, spline Savitzky–Golay, wavelet, CBSI, RLOESS, and WCBSI) on four quality metrics: SNR, ΔAUC, RMSE, and R. Our proposed nonlinear Hammerstein–Wiener method achieved the best SNR increase (p < 0.001) among all methods. Visual inspection revealed that our approach mitigated MA contaminations that other techniques could not remove effectively. MA correction quality was comparable with head- and probe-IMUs. Full article
(This article belongs to the Special Issue EEG and fNIRS-Based Sensors)
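
The Hammerstein–Wiener structure itself is a static nonlinearity, a linear dynamic block, and a second static nonlinearity in series. A forward-pass sketch mapping IMU motion to a predicted artifact (polynomial nonlinearities and an FIR block are assumptions; the paper's identification procedure is not reproduced):

```python
import numpy as np

def hammerstein_wiener(u: np.ndarray,
                       in_poly: np.ndarray,
                       fir: np.ndarray,
                       out_poly: np.ndarray) -> np.ndarray:
    """Forward Hammerstein-Wiener pass: static nonlinearity -> linear dynamics
    -> static nonlinearity, mapping an IMU motion trace u to a predicted artifact."""
    x = np.polyval(in_poly, u)                       # input nonlinearity (polynomial)
    y = np.convolve(x, fir, mode="full")[: len(u)]   # linear dynamic block (FIR)
    return np.polyval(out_poly, y)                   # output nonlinearity (polynomial)

# Artifact correction then subtracts the fitted prediction from the fNIRS channel:
# clean = fnirs - hammerstein_wiener(imu, in_poly, fir, out_poly)
# (block orders and the fitting procedure, e.g., iterative least squares, are
#  assumptions; the model is fitted to head- or probe-IMU recordings)
```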

20 pages, 10124 KB  
Article
Satellite Hyperspectral Nighttime Light Observation and Identification with DESIS
by Robert E. Ryan, Mary Pagnutti, Hannah Ryan, Kara Burch and Kimberly Manriquez
Remote Sens. 2024, 16(5), 923; https://doi.org/10.3390/rs16050923 - 6 Mar 2024
Cited by 5 | Viewed by 3375
Abstract
The satellite imagery of nighttime lights (NTLs) has been studied to understand human activities, economic development, and more recently, the ecological impact of brighter night skies. The Visible Infrared Imaging Radiometer Suite (VIIRS) Day–Night Band (DNB) offers perhaps the most advanced nighttime imaging capabilities to date, but its large pixel size and single band capture large-scale changes in NTL while missing granular but important details, such as lighting type and brightness. To better understand individual NTL sources in a region, the spectra of nighttime lights captured by the DLR Earth Sensing Imaging Spectrometer (DESIS) were extracted and compared against near-coincident VIIRS DNB imagery. The analysis shows that DESIS’s finer spatial and spectral resolutions can detect individual NTL locations and types beyond what is possible with the DNB. Extracted night light spectra, validated against ground truth measurements, demonstrate DESIS’s ability to accurately detect and identify narrow-band atomic emission lines that characterize the spectra of high-intensity discharge (HID) light sources and the broader spectral features associated with different light-emitting diode (LED) lights. These results suggest the possible application of using hyperspectral data from moderate-resolution sensors to identify lamp construction details, such as illumination source type and light quality in low-light contexts. NTL data from DESIS and other hyperspectral sensors may improve the scientific understanding of light pollution, lighting quality, and energy efficiency by identifying, evaluating, and mapping individual and small groups of light sources. Full article
(This article belongs to the Topic Advances in Earth Observation and Geosciences)
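
Identification of HID lamps by their atomic emission lines can be sketched as peak matching against known line positions; the line lists below are a small illustrative subset, and the matching rule is an assumption:

```python
import numpy as np

# A few well-known atomic emission lines (nm); illustrative subset only.
LAMP_LINES = {
    "high-pressure sodium": [568.8, 589.0, 589.6, 615.4],
    "mercury vapor": [404.7, 435.8, 546.1, 577.0, 579.1],
}

def match_lamp(peak_wavelengths, tol_nm: float = 3.0) -> str:
    """Score each lamp type by how many detected spectral peaks fall near its lines."""
    peaks = np.asarray(peak_wavelengths)

    def score(lines):
        return sum(np.any(np.abs(peaks - w) <= tol_nm) for w in lines)

    return max(LAMP_LINES, key=lambda lamp: score(LAMP_LINES[lamp]))

print(match_lamp([546.0, 577.5, 435.5]))  # -> "mercury vapor"
```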

18 pages, 695 KB  
Article
A Neighborhood-Similarity-Based Imputation Algorithm for Healthcare Data Sets: A Comparative Study
by Colin Wilcox, Vasileios Giagos and Soufiene Djahel
Electronics 2023, 12(23), 4809; https://doi.org/10.3390/electronics12234809 - 28 Nov 2023
Cited by 1 | Viewed by 1573
Abstract
The increasing computerisation of medical services has highlighted inconsistencies in the way in which patients’ historic medical data were recorded. Differences in process and practice between medical services and facilities have led to many incomplete and inaccurate medical histories being recorded. To create a single point of truth going forward, it is necessary to correct these inconsistencies. A common way to do this has been to use imputation techniques to predict missing data values based on the known values in the data set. In this paper, we propose a neighborhood similarity measure-based imputation technique and analyze its achieved prediction accuracy in comparison with a number of traditional imputation methods using both an incomplete anonymized diabetes medical data set and a number of simulated data sets as the sources of our data. The aim is to determine whether any improvement could be made in the accuracy of predicting a diabetes diagnosis using the known outcomes of the diabetes patients’ data set. The obtained results have proven the effectiveness of our proposed approach compared to other state-of-the-art single-pass imputation techniques. Full article
(This article belongs to the Special Issue Advances in Intelligent Data Analysis and Its Applications)
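
A sketch in the spirit of the proposed technique: each missing value is predicted from the rows most similar on the jointly observed features. The masked Euclidean distance and k are assumptions, not the paper's exact similarity measure:

```python
import numpy as np

def neighborhood_impute(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Impute NaNs from the k most similar rows (masked Euclidean distance)."""
    X = X.astype(float)
    filled = X.copy()
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        shared = ~missing & ~np.isnan(X)   # features observed in both rows
        dist = np.where(
            shared.any(axis=1),
            np.nansum((X - row) ** 2, axis=1) / np.maximum(shared.sum(axis=1), 1),
            np.inf,
        )
        dist[i] = np.inf                   # exclude the row itself
        neighbors = np.argsort(dist)[:k]
        for j in np.where(missing)[0]:
            vals = X[neighbors, j]
            if not np.isnan(vals).all():
                filled[i, j] = np.nanmean(vals)
    return filled
```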

19 pages, 27203 KB  
Article
Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction
by Deborah Pereg
J. Imaging 2023, 9(11), 237; https://doi.org/10.3390/jimaging9110237 - 30 Oct 2023
Cited by 3 | Viewed by 2046
Abstract
Speckle noise has long been an extensively studied problem in medical imaging. In recent years, there have been significant advances in leveraging deep learning methods for noise reduction. Nevertheless, adaptation of supervised learning models to unseen domains remains a challenging problem. Specifically, deep neural networks (DNNs) trained for computational imaging tasks are vulnerable to changes in the acquisition system’s physical parameters, such as sampling space, resolution, and contrast. Even within the same acquisition system, performance degrades across datasets of different biological tissues. In this work, we propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction that offers high-speed training (on the order of seconds) and requires only a single image, or part of an image, and a corresponding speckle-suppressed ground truth for training. Furthermore, we formulate the domain shift problem for diverse OCT imaging systems and prove that the output resolution of a trained despeckling model is determined by the source domain resolution. We also provide possible remedies. We propose different practical implementations of our approach, and verify and compare their applicability, robustness, and computational efficiency. Our results demonstrate the potential to improve sample complexity, generalization, and time efficiency for coherent and non-coherent noise reduction via supervised learning models, which can also be leveraged for other real-time computer vision applications. Full article

19 pages, 3396 KB  
Article
Early Crop Mapping Using Dynamic Ecoregion Clustering: A USA-Wide Study
by Yiqun Wang, Hui Huang and Radu State
Remote Sens. 2023, 15(20), 4962; https://doi.org/10.3390/rs15204962 - 14 Oct 2023
Cited by 5 | Viewed by 2449
Abstract
Mapping target crops earlier than the harvest period is an essential task for improving agricultural productivity and decision-making. This paper presents a new method for early crop mapping for the entire conterminous USA (CONUS) land area using the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) data with a dynamic ecoregion clustering approach. Ecoregions, geographically distinct areas with unique ecological patterns and processes, provide a valuable framework for large-scale crop mapping. We conducted our dynamic ecoregion clustering by analyzing soil, climate, elevation, and slope data. This analysis facilitated the division of the cropland area within the CONUS into distinct ecoregions. Unlike static ecoregion clustering, which generates a single ecoregion map that remains unchanged over time, our dynamic ecoregion approach produces a unique ecoregion map for each year. This dynamic approach enables us to consider the year-to-year climate variations that significantly impact crop growth, enhancing the accuracy of our crop mapping process. Subsequently, a Random Forest classifier was employed to train individual models for each ecoregion. These models were trained using the time-series MODIS (Moderate Resolution Imaging Spectroradiometer) 250-m NDVI and EVI data retrieved from Google Earth Engine, covering the crop growth periods spanning from 2013 to 2017, and evaluated from 2018 to 2022. Ground truth data were sourced from the US Department of Agriculture’s (USDA) Cropland Data Layer (CDL) products. The evaluation results showed that the dynamic clustering method achieved higher accuracy than the static clustering method in early crop mapping in the entire CONUS. This study’s findings can be helpful for improving crop management and decision-making for agricultural activities by providing early and accurate crop mapping. Full article
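
A sketch of the per-year cluster-then-classify structure described above; KMeans is an assumed stand-in for the paper's dynamic clustering, and the feature shapes are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def train_dynamic_ecoregions(env_features, vi_series, labels, year, n_regions=10):
    """Re-cluster ecoregions for one year, then fit one Random Forest per region.

    env_features: (n, p) soil/climate/elevation/slope features for this year
    vi_series:    (n, t) early-season NDVI/EVI time series per pixel
    labels:       (n,)   CDL crop labels used as ground truth
    """
    regions = KMeans(n_clusters=n_regions, random_state=year).fit_predict(env_features)
    models = {}
    for r in range(n_regions):
        mask = regions == r
        if mask.sum() == 0:
            continue  # empty cluster; nothing to train
        models[r] = RandomForestClassifier(n_estimators=200, random_state=0).fit(
            vi_series[mask], labels[mask]
        )
    return regions, models
```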