Search Results (14)

Search Parameters:
Keywords = terrain-aware imaging

26 pages, 5066 KB  
Article
DSM-Seg: A CNN-RWKV Hybrid Framework for Forward-Looking Sonar Image Segmentation in Deep-Sea Mining
by Xinran Liu, Jianmin Yang, Enhua Zhang, Wenhao Xu and Changyu Lu
Remote Sens. 2025, 17(17), 2997; https://doi.org/10.3390/rs17172997 - 28 Aug 2025
Abstract
Accurate and real-time environmental perception is essential for the safe and efficient execution of deep-sea mining operations. Semantic segmentation of forward-looking sonar (FLS) images plays a pivotal role in enabling environmental awareness for deep-sea mining vehicles (DSMVs), but remains challenging due to strong acoustic noise, blurred object boundaries, and long-range semantic dependencies. To address these issues, this study proposes DSM-Seg, a novel hybrid segmentation architecture combining Convolutional Neural Networks (CNNs) and Receptance Weighted Key-Value (RWKV) modeling. The architecture integrates a Physical Prior-Based Semantic Guidance Module (PSGM), which utilizes sonar-specific physical priors to produce high-confidence semantic guidance maps, thereby enhancing the delineation of target boundaries. In addition, an RWKV-Based Global Fusion with Semantic Constraints (RGFSC) module is introduced to suppress cross-regional interference in long-range dependency modeling and achieve the effective fusion of local and global semantic information. Extensive experiments on both a self-collected seabed terrain dataset and a public marine debris dataset demonstrate that DSM-Seg significantly improves segmentation accuracy under complex conditions while satisfying real-time performance requirements. These results highlight the potential of the proposed method to support intelligent environmental perception in DSMV applications.

13 pages, 3172 KB  
Article
A Simulation Framework for Zoom-Aided Coverage Path Planning with UAV-Mounted PTZ Cameras
by Natalia Chacon Rios, Sabyasachi Mondal and Antonios Tsourdos
Sensors 2025, 25(17), 5220; https://doi.org/10.3390/s25175220 - 22 Aug 2025
Abstract
Achieving energy-efficient aerial coverage remains a significant challenge for UAV-based missions, especially over hilly terrain where consistent ground resolution is needed. Traditional solutions use changes in altitude to compensate for elevation changes, which requires a significant amount of energy. This paper presents a new coverage path planning (CPP) approach that uses real-time zoom control of a pan–tilt–zoom (PTZ) camera to keep the ground sampling distance (GSD)—the distance between two consecutive pixel centers projected onto the ground—constant without changing the UAV’s altitude. The proposed algorithm changes the camera’s focal length based on the height of the terrain, and changes the altitude only when the zoom limits are reached. Simulation results on a variety of terrain profiles show that the zoom-based CPP substantially reduces flight duration and path length compared to traditional altitude-based strategies. The framework can also be used with low-cost camera systems with limited zoom capability, thereby improving operational feasibility. These findings establish a basis for further development and field validation in upcoming research phases.
(This article belongs to the Special Issue Unmanned Aerial Systems in Precision Agriculture)
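The zoom relation described in the abstract can be sketched with the standard pinhole-camera GSD formula, GSD = pixel pitch × height above ground / focal length. This is an illustrative sketch only, not the paper's algorithm; the pixel pitch, zoom limits, and function names are assumptions.

```python
# Sketch of the zoom-for-constant-GSD idea: solve the pinhole GSD
# relation for focal length as terrain elevation varies, and flag when
# the zoom limits are hit (at which point an altitude change is needed).
# All parameter values below are illustrative assumptions.

def focal_length_for_gsd(alt_m, terrain_m, gsd_target_m, pixel_pitch_m,
                         f_min_m, f_max_m):
    """Return (focal_length_m, altitude_adjust_needed)."""
    h_agl = alt_m - terrain_m                      # height above ground level
    f_needed = pixel_pitch_m * h_agl / gsd_target_m
    # Zoom only; fall back to an altitude change when the limits are hit.
    f = min(max(f_needed, f_min_m), f_max_m)
    return f, f != f_needed

# Example: 3.45 um pixels, 2 cm target GSD, terrain 20 m above datum.
f, need_alt = focal_length_for_gsd(120.0, 20.0, 0.02, 3.45e-6,
                                   0.004, 0.030)   # -> 17.25 mm, no climb
```

Flat terrain far below the UAV can push `f_needed` past `f_max_m`, which is exactly the fallback case the abstract mentions.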

14 pages, 16969 KB  
Article
FTT: A Frequency-Aware Texture Matching Transformer for Digital Bathymetry Model Super-Resolution
by Peikun Xiao, Jianping Wu and Yingjie Wang
J. Mar. Sci. Eng. 2025, 13(7), 1365; https://doi.org/10.3390/jmse13071365 - 17 Jul 2025
Abstract
Deep learning has shown significant advantages over traditional spatial interpolation methods in single image super-resolution (SISR). Recently, many studies have applied super-resolution (SR) methods to generate high-resolution (HR) digital bathymetry models (DBMs), but substantial differences between DBM and natural images have been ignored, which leads to serious distortions and inaccuracies. Given the critical role of HR DBM in marine resource exploitation, economic development, and scientific innovation, we propose a frequency-aware texture matching transformer (FTT) for DBM SR, incorporating global terrain feature extraction (GTFE), high-frequency feature extraction (HFFE), and a terrain matching block (TMB). GTFE has the capability to perceive spatial heterogeneity and spatial locations, allowing it to accurately capture large-scale terrain features. HFFE can explicitly extract high-frequency priors beneficial for DBM SR and implicitly refine the representation of high-frequency information in the global terrain feature. TMB improves the fidelity of the generated HR DBM by generating position offsets to restore warped textures in deep features. Experimental results demonstrate that the proposed FTT has superior performance in terms of elevation, slope, aspect, and fidelity of the generated HR DBM. Notably, the root mean square error (RMSE) of elevation in steep terrain has been reduced by 4.89 m, which is a significant improvement in the accuracy and precision of the reconstruction. This research holds significant implications for improving the accuracy of DBM SR methods and the usefulness of HR bathymetry products for future marine research.
(This article belongs to the Section Ocean Engineering)

18 pages, 12097 KB  
Article
Adaptive Outdoor Cleaning Robot with Real-Time Terrain Perception and Fuzzy Control
by Raul Fernando Garcia Azcarate, Akhil Jayadeep, Aung Kyaw Zin, James Wei Shung Lee, M. A. Viraj J. Muthugala and Mohan Rajesh Elara
Mathematics 2025, 13(14), 2245; https://doi.org/10.3390/math13142245 - 10 Jul 2025
Abstract
Outdoor cleaning robots must operate reliably across diverse and unstructured surfaces, yet many existing systems lack the adaptability to handle terrain variability. This paper proposes a terrain-aware cleaning framework that dynamically adjusts robot behavior based on real-time surface classification and slope estimation. A 128-channel LiDAR sensor captures signal intensity images, which are processed by a ResNet-18 convolutional neural network to classify floor types as wood, smooth, or rough. Simultaneously, pitch angles from an onboard IMU detect terrain inclination. These inputs are transformed into fuzzy sets and evaluated using a Mamdani-type fuzzy inference system. The controller adjusts brush height, brush speed, and robot velocity through 81 rules derived from 48 structured cleaning experiments across varying terrain and slopes. Validation was conducted in low-light (night-time) conditions, leveraging LiDAR's lighting-invariant capabilities. Field trials confirm that the robot responds effectively to environmental conditions, such as reducing speed on slopes or increasing brush pressure on rough surfaces. The integration of deep learning and fuzzy control enables safe, energy-efficient, and adaptive cleaning in complex outdoor environments. This work demonstrates the feasibility and real-world applicability of combining perception and inference-based control in terrain-adaptive robotic systems.
(This article belongs to the Special Issue Research and Applications of Neural Networks and Fuzzy Logic)
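The Mamdani-type inference the abstract describes can be illustrated with a toy two-rule controller: fuzzify the inputs with triangular membership functions, fire rules with min-AND, and defuzzify by centroid. The membership functions, universes, and rules below are made-up stand-ins for the paper's 81-rule base.

```python
# Toy Mamdani fuzzy controller: roughness and slope in, brush speed out.
# Everything here (universes, sets, rules) is illustrative, not the paper's.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def brush_speed(roughness, slope_deg):
    # Fuzzify the antecedents (illustrative universes).
    rough_high = tri(roughness, 0.4, 1.0, 1.6)
    slope_steep = tri(slope_deg, 5.0, 15.0, 25.0)
    # Rule firing strengths (min for AND, complement for NOT).
    r1 = min(rough_high, 1.0 - slope_steep)   # rough AND not steep -> fast
    r2 = slope_steep                          # steep               -> slow
    # Centroid defuzzification over a discretized output (0..100 %).
    num = den = 0.0
    for y in range(101):
        mu = max(min(r1, tri(y, 50, 80, 110)),    # clipped "fast" set
                 min(r2, tri(y, -10, 20, 50)))    # clipped "slow" set
        num += mu * y
        den += mu
    return num / den if den else 50.0
```

A rough flat surface then yields a high brush speed, while a steep slope pulls the centroid toward the slow set, matching the behavior the field trials describe.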

16 pages, 8923 KB  
Article
A Geometric Significance-Aware Deep Mutual Learning Network for Building Extraction from Aerial Images
by Ming Hao, Huijing Lin, Shilin Chen, Weiqiang Luo, Hua Zhang and Nanshan Zheng
Drones 2024, 8(10), 593; https://doi.org/10.3390/drones8100593 - 18 Oct 2024
Cited by 1
Abstract
Knowledge-driven building extraction methods exhibit a restricted adaptability scope and are vulnerable to external factors that affect their extraction accuracy. Data-driven building extraction methods, on the other hand, lack interpretability, rely heavily on extensive training data, and may produce extraction outcomes with blurred building boundaries. The integration of pre-existing knowledge with data-driven learning is essential for the intelligent identification and extraction of buildings from high-resolution aerial images. To overcome the limitations of current deep learning building extraction networks in effectively leveraging prior knowledge of aerial images, a geometric significance-aware deep mutual learning network (GSDMLNet) is proposed. Firstly, the GeoSay algorithm is utilized to derive building geometric significance feature maps as prior knowledge and integrate them into the deep learning network to enhance the targeted extraction of building features. Secondly, a bi-directional guidance attention module (BGAM) is developed to facilitate deep mutual learning between the building feature map and the building geometric significance feature map within the dual-branch network. Furthermore, an enhanced flow alignment module (FAM++) is deployed to produce high-resolution, robust semantic feature maps with strong interpretability. Ultimately, a multi-objective loss function is crafted to refine the network's performance. Experimental results demonstrate that the GSDMLNet excels in building extraction tasks within densely populated and diverse urban areas, reducing misidentification of shadow-obscured regions and color-similar terrains lacking building structural features. This approach effectively ensures the precise acquisition of urban building information in aerial images.

17 pages, 5949 KB  
Article
Influence of Camera Placement on UGV Teleoperation Efficiency in Complex Terrain
by Karol Cieślik, Piotr Krogul, Tomasz Muszyński, Mirosław Przybysz, Arkadiusz Rubiec and Rafał Kamil Typiak
Appl. Sci. 2024, 14(18), 8297; https://doi.org/10.3390/app14188297 - 14 Sep 2024
Abstract
Many fields where human health and life are at risk are increasingly utilizing mobile robots and UGVs (Unmanned Ground Vehicles). They typically operate in teleoperation mode (control based on the transmitted image, outside the operator's direct field of view), as autonomy is not yet sufficiently developed and key decisions should be made by a human. Fast and effective decision making requires a high level of situational and action awareness, which relies primarily on visualizing the robot's surroundings and end effectors using cameras and displays. This study compares the effectiveness of three robot-area imaging solutions using simultaneous transmission of images from three cameras while driving a UGV in complex terrain.
(This article belongs to the Section Robotics and Automation)

19 pages, 9015 KB  
Article
A Deep Learning-Enhanced Multi-Modal Sensing Platform for Robust Human Object Detection and Tracking in Challenging Environments
by Peng Cheng, Zinan Xiong, Yajie Bao, Ping Zhuang, Yunqi Zhang, Erik Blasch and Genshe Chen
Electronics 2023, 12(16), 3423; https://doi.org/10.3390/electronics12163423 - 12 Aug 2023
Cited by 8
Abstract
In modern security situations, tracking multiple human objects in real-time within challenging urban environments is a critical capability for enhancing situational awareness, minimizing response time, and increasing overall operational effectiveness. Tracking multiple entities enables informed decision-making, risk mitigation, and the safeguarding of civil-military operations to ensure safety and mission success. This paper presents a multi-modal electro-optical/infrared (EO/IR) and radio frequency (RF) fused sensing (MEIRFS) platform for real-time human object detection, recognition, classification, and tracking in challenging environments. By utilizing different sensors in a complementary manner, the robustness of the sensing system is enhanced, enabling reliable detection and recognition results across various situations. Specifically designed radar tags and thermal tags can be used to discriminate between friendly and non-friendly objects. The system incorporates deep learning-based image fusion and human object recognition and tracking (HORT) algorithms to ensure accurate situation assessment. After integrating into an all-terrain robot, multiple ground tests were conducted to verify the consistency of the HORT in various environments. The MEIRFS sensor system has been designed to meet the Size, Weight, Power, and Cost (SWaP-C) requirements for installation on autonomous ground and aerial vehicles.

17 pages, 20517 KB  
Article
Place Recognition with Memorable and Stable Cues for Loop Closure of Visual SLAM Systems
by Rafiqul Islam and Habibullah Habibullah
Robotics 2022, 11(6), 142; https://doi.org/10.3390/robotics11060142 - 4 Dec 2022
Cited by 5
Abstract
Visual Place Recognition (VPR) is a fundamental yet challenging task in Visual Simultaneous Localization and Mapping (V-SLAM) problems. VPR works as a subsystem of the V-SLAM: the task of retrieving images upon revisiting the same place under different conditions. The problem is even more difficult for agricultural and all-terrain autonomous mobile robots that work in different scenarios and weather conditions. Over the last few years, many state-of-the-art methods have been proposed to overcome the limitations of existing VPR techniques. VPR using a bag-of-words obtained from local features works well for large-scale image retrieval problems. However, arbitrary aggregation of local features produces a large bag-of-words vector database and limits efficient feature learning, aggregation, and querying of candidate images. Moreover, aggregating arbitrary features is inefficient, as not all local features contribute equally to long-term place recognition tasks. Therefore, a novel VPR architecture is proposed, suitable for efficient place recognition with semantically meaningful local features and their 3D geometrical verification. The proposed end-to-end architecture is fueled by a deep neural network, a bag-of-words database, and 3D geometrical verification for place recognition. This method is aware of meaningful and informative features of images for better scene understanding. Later, 3D geometrical information from the corresponding meaningful features is computed and utilised for verifying correct place recognition. The proposed method is tested on four well-known public datasets and a Micro Aerial Vehicle (MAV) dataset recorded in Victoria Park, Adelaide, Australia, for experimental validation. Extensive experimental results, considering standard evaluation metrics for VPR, show that the proposed method outperforms the available state-of-the-art methods.
(This article belongs to the Section Agricultural and Field Robotics)

22 pages, 4474 KB  
Article
Standoff Infrared Measurements of Chemical Plume Dynamics in Complex Terrain Using a Combination of Active Swept-ECQCL Laser Spectroscopy with Passive Hyperspectral Imaging
by Mark C. Phillips, Bruce E. Bernacki, Patrick T. Conry and Michael J. Brown
Remote Sens. 2022, 14(15), 3756; https://doi.org/10.3390/rs14153756 - 5 Aug 2022
Cited by 4
Abstract
Chemical plume detection and modeling in complex terrain present numerous challenges. We present experimental results from outdoor releases of two chemical tracers (sulfur hexafluoride and Freon-152a) from different locations in mountainous terrain. Chemical plumes were detected using two standoff instruments collocated at a distance of 1.5 km from the plume releases. A passive long-wave infrared hyperspectral imaging system was used to show time- and space-resolved plume transport in regions near the source. An active infrared swept-wavelength external cavity quantum cascade laser system was used in a standoff configuration to measure quantitative chemical column densities with high time resolution and high sensitivity along a single measurement path. Both instruments provided chemical-specific detection of the plumes and provided complementary information over different temporal and spatial scales. The results show highly variable plume propagation dynamics near the release points, strongly dependent on the local topography and winds. Effects of plume stagnation, plume splitting, and plume mixing were all observed and are explained based on local topographic and wind conditions. Measured plume column densities at distances ~100 m from the release point show temporal fluctuations over ~1 s time scales and spatial variations over ~1 m length scales. The results highlight the need for high-speed and spatially resolved measurement techniques to provide validation data at the relevant spatial and temporal scales required for high-fidelity terrain-aware microscale plume propagation models.
(This article belongs to the Special Issue Hyperspectral Remote Sensing: Current Situation and New Challenges)
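The quantitative column densities mentioned in the abstract are the standard product of a path-integrated absorption measurement via the Beer-Lambert law. A minimal sketch, with an assumed placeholder absorption cross-section (not actual SF6 or Freon-152a data):

```python
import math

# Illustrative Beer-Lambert retrieval of a path-integrated column density
# from a standoff laser absorption measurement. The cross-section value
# used in the example is a made-up placeholder, not real spectroscopy.

def column_density(intensity, intensity_ref, cross_section_cm2):
    """Column density N in molecules/cm^2 along the measurement path.

    Beer-Lambert: I = I0 * exp(-sigma * N)  =>  N = ln(I0 / I) / sigma.
    """
    return math.log(intensity_ref / intensity) / cross_section_cm2

# Example: 10% absorption with an assumed sigma of 1e-18 cm^2.
N = column_density(0.9, 1.0, 1e-18)   # ~1.05e17 molecules/cm^2
```

In practice the reference intensity comes from off-resonance wavelengths of the swept laser, which is what lets such instruments report concentrations at high time resolution.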

30 pages, 1745 KB  
Article
Situational Awareness and Problems of Its Formation in the Tasks of UAV Behavior Control
by Dmitry M. Igonin, Pavel A. Kolganov and Yury V. Tiumentsev
Appl. Sci. 2021, 11(24), 11611; https://doi.org/10.3390/app112411611 - 7 Dec 2021
Cited by 8
Abstract
Situational awareness formation is one of the most critical elements in solving the problem of UAV behavior control. It aims to provide information support for UAV behavior control according to its objectives and tasks to be completed. We consider the UAV to be a type of controlled dynamic system. The article shows the place of UAVs in the hierarchy of dynamic systems. We introduce the concepts of UAV behavior and activity and formulate requirements for algorithms for controlling UAV behavior. We propose the concept of situational awareness as applied to the problem of behavior control of highly autonomous UAVs (HA-UAVs) and analyze the levels and types of this situational awareness. We show the specifics of situational awareness formation for UAVs and analyze its differences from situational awareness for manned aviation and remotely piloted UAVs. We highlight and discuss in more detail two crucial elements of situational awareness for HA-UAVs. The first is related to the analysis and prediction of the behavior of objects in the vicinity of the HA-UAV. The general considerations involved in solving this problem, including the problem of analyzing the group behavior of such objects, are discussed. As an illustrative example, the solution to the problem of tracking an aircraft maneuvering in the vicinity of a HA-UAV is given. The second element is related to the processing of visual information, one of the primary sources of situational awareness required for the operation of the HA-UAV control system. As an example here, we consider the problem of semantic segmentation of images processed when selecting a landing site for the HA-UAV in unfamiliar terrain. Both of these problems are solved using machine learning methods and tools. In the field of situational awareness for HA-UAVs, several problems remain to be solved; we formulate some of them and describe them briefly.

21 pages, 6988 KB  
Article
Social-Ecological Archetypes of Land Degradation in the Nigerian Guinea Savannah: Insights for Sustainable Land Management
by Ademola A. Adenle and Chinwe Ifejika Speranza
Remote Sens. 2021, 13(1), 32; https://doi.org/10.3390/rs13010032 - 23 Dec 2020
Cited by 18
Abstract
The Nigerian Guinea Savannah is the most extensive ecoregion in Nigeria, a major food production area, and contains many biodiversity protection areas. However, there is limited understanding of the social-ecological features of its degraded lands and potential insights for sustainable land management and governance. To fill this gap, the self-organizing map method was applied to identify the archetypes of both proximal and underlying drivers of land degradation in this region. Using 12 freely available spatial datasets of land-degradation drivers (4 environmental, 3 socio-economic, and 5 land-use management practices), the identified archetypes were intersected with the Moderate-Resolution Imaging Spectroradiometer (MODIS)-derived land-degradation status of the region and the state administrative boundaries. Nine archetypes were identified, dominated by: (1) protected areas; (2) very high-density population; (3) moderately high information/knowledge access; (4) low literacy levels and moderate–high poverty levels; (5) rural remoteness; (6) remoteness from a major road; (7) very high livestock density; (8) moderate poverty level and nearly level terrain; and (9) very rugged terrain and remote from a major road. Four archetypes characterized by very high-density population, moderate–high information/knowledge access, and moderate–high poverty level, as well as remoteness from a major town, were associated with 61.3% large-area degradation; the other five archetypes, covering 38.7% of the area, were responsible for small-area degradation. While different combinations of archetypes exist in all the states, the five states of Niger (40.5%), Oyo (29.6%), Kwara (24.4%), Nassarawa (18.6%), and Ekiti (17.6%) have the largest shares of the archetypes. To address these archetypal features, policies and practices that tackle increasing population in combination with poverty reduction, create awareness about land degradation, and promote sustainable practices and various forms of land restoration, such as tree planting, are necessary for progressing towards land-degradation neutrality in the Nigerian Guinea Savannah.
(This article belongs to the Special Issue Land Degradation Assessment with Earth Observation)
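The archetype-identification step rests on a self-organizing map: driver vectors are competitively assigned to map units, and each unit's weight vector becomes an archetype. A toy from-scratch sketch under illustrative assumptions (grid size, learning schedule, and synthetic data; the study used 12 spatial driver datasets):

```python
import numpy as np

# Toy self-organizing map: each of the grid units learns a prototype
# (archetype) of the input vectors. Parameters are illustrative only.

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.normal(size=(n_units, data.shape[1]))
    # Fixed 2-D coordinates of each map unit, for the neighborhood kernel.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3   # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))        # Gaussian kernel
            w += lr * h[:, None] * (x - w)
    return w

def archetype_of(data, w):
    """Assign each sample to its best-matching unit (its archetype)."""
    return np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])
```

Intersecting the per-pixel archetype labels with a degradation map, as the study does, is then a simple cross-tabulation of `archetype_of` output against degradation status.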

17 pages, 7830 KB  
Article
Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection
by Ahmed F. Fadhil, Raghuveer Kanneganti, Lalit Gupta, Henry Eberle and Ravi Vaidyanathan
Sensors 2019, 19(17), 3802; https://doi.org/10.3390/s19173802 - 3 Sep 2019
Cited by 22
Abstract
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runway and horizons as well as enhancing the awareness of surrounding terrain is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step to align EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
(This article belongs to the Special Issue Unmanned Aerial Vehicle Networks, Systems and Applications)
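One common DWT fusion rule of the kind the abstract evaluates is to average the approximation bands and keep the maximum-magnitude detail coefficients. A sketch using a single-level 2-D Haar transform as a stand-in (the abstract does not specify the wavelet, and the `fuse` rule here is an assumption, not necessarily one of the paper's four):

```python
import numpy as np

# One-level 2-D Haar analysis/synthesis plus a max-magnitude detail
# fusion rule. Input images must have even dimensions.

def haar2d(x):
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] + x[1::2, 0::2] - x[0::2, 1::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(evs, svs):
    """Average approximations, keep the stronger detail coefficient."""
    (a1, h1, v1, d1), (a2, h2, v2, d2) = haar2d(evs), haar2d(svs)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2d((a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2))
```

Swapping the `pick` and averaging choices across sub-bands is exactly how different rules emphasize different aspects (EVS edges vs. SVS terrain structure) of the fused image.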

16 pages, 8046 KB  
Article
Integrated UAV-Based Real-Time Mapping for Security Applications
by Daniel Hein, Thomas Kraft, Jörg Brauchle and Ralf Berger
ISPRS Int. J. Geo-Inf. 2019, 8(5), 219; https://doi.org/10.3390/ijgi8050219 - 8 May 2019
Cited by 28
Abstract
Security applications such as management of natural disasters and man-made incidents crucially depend on the rapid availability of a situation picture of the affected area. UAV-based remote sensing systems may constitute an essential tool for capturing aerial imagery in such scenarios. While several commercial UAV solutions already provide acquisition of high quality photos or real-time video transmission via radio link, generating instant high-resolution aerial maps is still an open challenge. For this purpose, the article presents a real-time processing tool chain, enabling generation of interactive aerial maps during flight. A key element of this tool chain is the combination of the Terrain Aware Image Clipping (TAC) algorithm and 12-bit JPEG compression. As a result, the data size of a common scenery can be reduced to approximately 0.4% of the original size, while preserving full geometric and radiometric resolution. Particular attention was paid to minimizing computational costs to reduce hardware requirements. The full workflow was demonstrated using the DLR Modular Airborne Camera System (MACS) operated on a conventional aircraft. In combination with a commercial radio link, the latency between image acquisition and visualization in the ground station was about 2 s. In addition, the integration of a miniaturized version of the camera system into a small fixed-wing UAV is presented. It is shown that the described workflow is efficient enough to instantly generate image maps even on small UAV hardware. Using a radio link, these maps can be broadcasted to on-site operation centers and are immediately available to the end-users.
(This article belongs to the Special Issue Innovative Sensing - From Sensors to Methods and Applications)

20 pages, 52237 KB  
Article
Visualization of Features in 3D Terrain
by Steve Dübel and Heidrun Schumann
ISPRS Int. J. Geo-Inf. 2017, 6(11), 357; https://doi.org/10.3390/ijgi6110357 - 14 Nov 2017
Cited by 11
Abstract
In 3D terrain analysis, topographical characteristics, such as mountains or valleys, and geo-spatial data characteristics, such as specific weather conditions or objects of interest, are important features. Visual representations of these features are essential in many application fields, e.g., aviation, meteorology, or geo-science. However, creating suitable representations is challenging. On the one hand, conveying the topography of terrain models is difficult, due to data complexity and computational costs. On the other hand, depicting further geo-spatial data increases the intricacy of the image and can lead to visual clutter. Moreover, perceptional issues within the 3D presentation, such as distance recognition, play a significant role as well. In this paper, we address the question of how features in the terrain can be visualized appropriately. We discuss various design options to facilitate the awareness of global and local features; that is, the coarse spatial distribution of characteristics and the fine-granular details. To improve spatial perception of the 3D environment, we propose suitable depth cues. Finally, we demonstrate the feasibility of our approach by a sophisticated framework called TedaVis that unifies the proposed concepts and facilitates designing visual terrain representations tailored to user requirements.
(This article belongs to the Special Issue Leading Progress in Digital Terrain Analysis and Modeling)