

Advancing Land Monitoring through Synergistic Harmonization of Optical, Radar and Lidar Satellite Technologies

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Environmental Sensing".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 4618

Special Issue Editor


Dr. Ram C. Sharma
Guest Editor
Department of Informatics, Tokyo University of Information Sciences, 4-1 Onaridai, Wakaba-ku, Chiba 265-8501, Japan
Interests: remote sensing; machine learning; ecology; plant communities

Special Issue Information

Dear Colleagues,

Land monitoring, the systematic observation, measurement, and analysis of the biophysical characteristics of the Earth's terrestrial surface, is attracting growing interest, driven by the need for sustainable management of the Earth's resources, advances in satellite and sensor technologies, and an increased focus on addressing climate change and environmental degradation. The process involves monitoring environmental factors such as land cover, urban expansion, vegetation, and agricultural practices to improve climate and disaster response, as well as sustainable environmental management.

The integration of optical, radar, and lidar satellite observations is expected to produce more accurate and consistent land monitoring solutions. However, identifying current research gaps in combining optical, radar, and lidar satellite observations for land monitoring requires examining the limitations and challenges associated with these technologies. One of the primary challenges is developing effective algorithms for integrating the structural information provided by radar and lidar sensors with the spectral information of optical sensors. These datasets are fundamentally different in nature, which complicates data fusion, and research is needed to create advanced algorithms and fusion techniques that can provide a more comprehensive view of the Earth's surface. Another challenge is ensuring continuity and consistency in data collection over time. Research is required to develop systems that can integrate data from multiple sources acquired at different times while maintaining temporal harmonization. Although deep learning plays a growing role in fusing data from various sensors, building models that accurately interpret such complex data remains difficult, as does improving radar and lidar spatial resolution without sacrificing data quality.
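As a minimal illustration of the feature-level fusion discussed above, the sketch below stacks hypothetical per-pixel optical reflectances with SAR backscatter and a lidar-derived canopy height into one feature matrix and trains a random forest on toy labels. All arrays, band counts, and the 15 m threshold are invented for illustration; they are not drawn from any study in this Special Issue.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels = 200

# Hypothetical per-pixel observations: 4 optical bands (spectral),
# 2 SAR backscatter channels, and 1 lidar canopy height (structural).
optical = rng.random((n_pixels, 4))
sar = rng.random((n_pixels, 2))
canopy_height = rng.random((n_pixels, 1)) * 30.0  # metres

# Feature-level fusion: stack the heterogeneous observations per pixel.
fused = np.hstack([optical, sar, canopy_height])

# Toy labels (0 = non-forest, 1 = forest) for illustration only.
labels = (canopy_height[:, 0] > 15.0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(fused, labels)
print(fused.shape)  # (200, 7)
print("training accuracy:", clf.score(fused, labels))
```

Real fusion pipelines must additionally handle co-registration, differing resolutions, and acquisition-time mismatches, which this stacking sketch deliberately ignores.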

Contributions in the form of original articles, letters, reviews, and perspectives are invited from researchers and practitioners working on developing algorithms, improving existing techniques, and applying these methods to diverse geographical regions and ecological settings, offering unprecedented insights into our changing world. We thank you in advance for your contributions to this Special Issue.

Dr. Ram C. Sharma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • land monitoring
  • vegetation
  • disaster response
  • urban
  • agriculture
  • biomass and carbon stocks
  • data fusion
  • optical, radar, lidar observations
  • algorithm development
  • deep learning
  • ecological applications
  • multi-spectral
  • hyper-spectral
  • SAR
  • Landsat
  • Sentinel
  • WorldView
  • GEDI

Published Papers (4 papers)


Research


21 pages, 5503 KiB  
Article
Mangrove Species Classification from Unmanned Aerial Vehicle Hyperspectral Images Using Object-Oriented Methods Based on Feature Combination and Optimization
by Fankai Ye and Baoping Zhou
Sensors 2024, 24(13), 4108; https://doi.org/10.3390/s24134108 - 24 Jun 2024
Viewed by 444
Abstract
Accurate and timely acquisition of the spatial distribution of mangrove species is essential for conserving ecological diversity. Hyperspectral imaging sensors are recognized as effective tools for monitoring mangroves. However, the spatial complexity of mangrove forests and the spectral redundancy of hyperspectral images pose challenges to fine classification. Moreover, finely classifying mangrove species using only spectral information is difficult due to spectral similarities among species. To address these issues, this study proposes an object-oriented multi-feature combination method for fine classification. Specifically, hyperspectral images were segmented using multi-scale segmentation techniques to obtain objects of different species. Then, a variety of features were extracted, including spectral features, vegetation indices, fractional-order differentials, texture features, and geometric features, and a genetic algorithm was used for feature selection. Additionally, ten feature combination schemes were designed to compare their effects on mangrove species classification. In terms of classification algorithms, the capabilities of four machine learning classifiers were evaluated: K-nearest neighbors (KNN), support vector machine (SVM), random forest (RF), and artificial neural network (ANN). The results indicate that SVM based on texture features achieved the highest classification accuracy among single-feature variables, with an overall accuracy of 97.04%. Among feature combination variables, ANN based on raw spectra, first-order differential spectra, texture features, vegetation indices, and geometric features achieved the highest classification accuracy, with an overall accuracy of 98.03%. Texture features and fractional-order differentiation are identified as important variables, while vegetation indices and geometric features can further improve classification accuracy.
Object-based classification, compared to pixel-based classification, can avoid the salt-and-pepper phenomenon and significantly enhance the accuracy and efficiency of mangrove species classification. Overall, the multi-feature combination method and object-based classification strategy proposed in this study provide strong technical support for the fine classification of mangrove species and are expected to play an important role in mangrove restoration and management.
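In outline, the feature-combination comparison described in this abstract amounts to evaluating one classifier over several per-object feature stacks. The sketch below uses synthetic stand-ins for the feature groups and labels (the actual study works on segmented hyperspectral objects and selects features with a genetic algorithm, both omitted here).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_objects = 300

# Hypothetical per-object features after segmentation:
# mean spectra (5), texture statistics (3), geometric descriptors (2).
spectral = rng.normal(size=(n_objects, 5))
texture = rng.normal(size=(n_objects, 3))
geometric = rng.normal(size=(n_objects, 2))
labels = rng.integers(0, 4, n_objects)  # four species, toy labels

# Compare feature-combination schemes with one classifier (RBF SVM).
schemes = {
    "spectral only": spectral,
    "spectral + texture": np.hstack([spectral, texture]),
    "all features": np.hstack([spectral, texture, geometric]),
}
for name, X in schemes.items():
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

With random labels the accuracies hover near chance; the point is only the comparison loop, which mirrors how a scheme table like the study's ten combinations can be evaluated uniformly.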

23 pages, 14122 KiB  
Article
Subsidence Characteristics in North Anhui Coal Mining Areas Using Space–Air–Ground Collaborative Observations
by Li’ao Quan, Shuanggen Jin, Jianxin Zhang, Junyun Chen and Junjun He
Sensors 2024, 24(12), 3869; https://doi.org/10.3390/s24123869 - 14 Jun 2024
Viewed by 339
Abstract
To fully understand the patterns of land and ecological damage caused by coal mining subsidence, and to carry out ecological mine restoration and management scientifically, accurate information on coal mining is urgently needed, particularly in complex mining areas such as North Anhui, China. In this paper, a space–air–ground collaborative monitoring system was constructed for coal mining areas based on multi-source remote sensing data, and the subsidence characteristics of coal mining areas in North Anhui were investigated. From 2019 to 2022, 16 new coal mining subsidence areas were found in northern Anhui, with the total area increasing by 8.1%. In terms of land use, from 2012 to 2022 water areas increased by 101.9 km2, cultivated land decreased by 99.3 km2, and residential land decreased by 11.8 km2. By depth, the subsidence areas comprise 307.9 km2 of light subsidence (depth less than 500 mm), 161.8 km2 of medium subsidence (500 mm to 1500 mm), and 281.2 km2 of heavy subsidence (greater than 1500 mm). The total subsidence governance area is 191.2 km2, accounting for 26.5% of the total subsidence area. Among prefecture-level cities, the governance rate reaches 51.3% in Huaibei, 10.1% in Huainan, and 13.6% in Fuyang. The total reclamation area is 68.8 km2, accounting for 34.5% of the subsidence governance area. At present, 276.1 km2 within the subsidence area has reached stable subsidence conditions, mainly distributed in the Huaibei mining area, which accounts for about 60% of the total stable subsidence area.

20 pages, 74304 KiB  
Article
Enhancing Wetland Mapping: Integrating Sentinel-1/2, GEDI Data, and Google Earth Engine
by Hamid Jafarzadeh, Masoud Mahdianpari, Eric W. Gill and Fariba Mohammadimanesh
Sensors 2024, 24(5), 1651; https://doi.org/10.3390/s24051651 - 3 Mar 2024
Viewed by 1497
Abstract
Wetlands are amongst Earth’s most dynamic and complex ecological resources, serving as productive and biodiverse ecosystems. Enhancing the quality of wetland mapping through Earth observation (EO) data is essential for improving effective management and conservation practices. However, achieving reliable and accurate wetland mapping faces challenges due to the heterogeneous and fragmented landscape of wetlands, along with spectral similarities among different wetland classes. The present study aims to produce advanced 10 m spatial resolution wetland classification maps for four pilot sites on the Island of Newfoundland in Canada. Employing a comprehensive and multidisciplinary approach, this research leverages the synergistic use of optical, synthetic aperture radar (SAR), and light detection and ranging (LiDAR) data. It focuses on ecological and hydrological interpretation using multi-source and multi-sensor EO data to evaluate their effectiveness in identifying wetland classes. The diverse data sources include Sentinel-1 and -2 satellite imagery, Global Ecosystem Dynamics Investigation (GEDI) LiDAR footprints, the Multi-Error-Removed Improved-Terrain (MERIT) Hydro dataset, and the European ReAnalysis (ERA5) dataset. Elevation data and topographical derivatives, such as slope and aspect, were also included in the analysis. The study evaluates the added value of incorporating these new data sources into wetland mapping. Using the Google Earth Engine (GEE) platform and the Random Forest (RF) model, two main objectives are pursued: (1) integrating the GEDI LiDAR footprint heights with multi-source datasets to generate a 10 m vegetation canopy height (VCH) map and (2) seeking to enhance wetland mapping by utilizing the VCH map as an input predictor. Results highlight the significant role of the VCH variable derived from GEDI samples in enhancing wetland classification accuracy, as it provides a vertical profile of vegetation.
Accordingly, the VCH estimates achieved a coefficient of determination (R2) of 0.69, a root-mean-square error (RMSE) of 1.51 m, and a mean absolute error (MAE) of 1.26 m. Leveraging VCH in the classification procedure improved the accuracy, with a maximum overall accuracy of 93.45%, a kappa coefficient of 0.92, and an F1 score of 0.88. This study underscores the importance of multi-source and multi-sensor approaches incorporating diverse EO data to address various factors for effective wetland mapping. The results are expected to benefit future wetland mapping studies.
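A drastically simplified, local analogue of objective (2) is shown below: add a canopy-height predictor to a random-forest classifier and inspect its feature importance. The predictors, class rule, and numbers are all synthetic stand-ins; the actual workflow runs in Google Earth Engine on Sentinel-1/2 composites and GEDI-derived VCH.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500

# Hypothetical predictors: optical indices, SAR backscatter, and a
# GEDI-style vegetation canopy height (VCH) in metres.
s2 = rng.normal(size=(n, 3))
s1 = rng.normal(size=(n, 2))
vch = rng.normal(loc=5.0, scale=2.0, size=(n, 1))

# Toy wetland classes driven partly by VCH, so its importance shows up.
labels = (vch[:, 0] > 5.0).astype(int) * 2 + (s2[:, 0] > 0).astype(int)

X = np.hstack([s2, s1, vch])  # VCH is the last column
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))
print("VCH importance:", rf.feature_importances_[-1])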
Show Figures

Figure 1

Review

Jump to: Research

24 pages, 16727 KiB  
Review
Bibliometric Analysis of Weather Radar Research from 1945 to 2024: Formations, Developments, and Trends
by Yin Liu
Sensors 2024, 24(11), 3531; https://doi.org/10.3390/s24113531 - 30 May 2024
Viewed by 396
Abstract
In the development of meteorological detection technology and services, weather radar undoubtedly plays a pivotal role, especially in the monitoring and early warning of severe convective weather events, where it serves an irreplaceable function. This research delves into the landscape of weather radar [...] Read more.
In the development of meteorological detection technology and services, weather radar undoubtedly plays a pivotal role, especially in the monitoring and early warning of severe convective weather events, where it serves an irreplaceable function. This research delves into the landscape of weather radar research from 1945 to 2024, employing scientometric methods to investigate 13,981 publications from the Web of Science (WoS) core collection database. This study aims to unravel, for the first time, the foundational structures shaping the knowledge domain of weather radar over an 80-year period, exploring general features, collaboration, co-citation, and keyword co-occurrence. Key findings reveal a significant surge in both publications and citations post-1990, peaking in 2022 with 1083 publications and 13832 citations, signaling sustained growth and interest in the field after a period of stagnation. The United States, China, and European countries emerge as key drivers of weather radar research, with robust international collaboration playing a pivotal role in the field’s rapid evolution. Analysis uncovers 30 distinct co-citation clusters, showcasing the progression of weather radar knowledge structures. Notably, deep learning emerges as a dynamic cluster, garnering attention and yielding substantial outcomes in contemporary research efforts. Over eight decades, the focus of weather radar investigations has transitioned from hardware and software enhancements to Artificial Intelligence (AI) technology integration and multifunctional applications across diverse scenarios. This study identifies four key areas for future research: leveraging AI technology, advancing all-weather observation techniques, enhancing system refinement, and fostering networked collaborative observation technologies. This research endeavors to support academics by offering an in-depth comprehension of the progression of weather radar research. 
The findings can be a valuable resource for scholars in efficiently locating pertinent publications and journals. Furthermore, policymakers can rely on the insights gleaned from this study as a well-organized reference point. Full article
Show Figures

Figure 1

Back to TopTop