Deep Learning Applications of 3D Reconstruction and Visualization from Remote Sensing Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 918

Special Issue Editors


Prof. Dr. Henry Meißner
Guest Editor
German Aerospace Center (DLR), Institute of Optical Sensor Systems, Rutherfordstr. 2, D-12489 Berlin, Germany
Interests: artificial intelligence as a counterpart to traditional photogrammetric approaches; remote sensing in search and rescue operations; calibration and validation of heterogeneous sensor concepts and their accuracy/quality

Prof. Dr. Francesco Nex
Guest Editor
Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
Interests: geometric and radiometric sensors; sensor fusion; calibration of imagery; signal/image processing; mission planning; navigation and position/orientation; machine learning; simultaneous localization and mapping; regulations and economic impact; agriculture; geosciences; urban areas; architecture; monitoring/change detection; education; unmanned aerial vehicles (UAV)

Special Issue Information

Dear Colleagues,

Deep learning applications are emerging and thriving in a wide range of fields, including education, healthcare, marketing and advertising, cybersecurity, and natural language processing. The number of applications, new approaches, and network architectures has grown especially rapidly in remote sensing, where related research ranges from automation, enhanced spatial understanding, disaster management, and robotics to fundamental research.

Although some algorithms and approaches have been known for decades, exciting new methods continue to emerge from novel combinations and refinements of existing techniques.

This Special Issue aims to cover recent advancements in deep learning methods in the field of 3D reconstruction and geo-visualization. Both original research and review articles are welcome. Topics include, but are not limited to, the following:

  •     Multi-spectral and hyperspectral remote sensing;
  •     Lidar and laser scanning;
  •     Geometric reconstruction;
  •     Physical modeling and signatures;
  •     Change detection;
  •     Image processing and pattern recognition;
  •     Remote sensing applications.

Prof. Dr. Henry Meißner
Prof. Dr. Francesco Nex
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • 3D reconstruction
  • visualization
  • disaster management
  • enhanced spatial understanding
  • algorithms

Published Papers (1 paper)


Research

20 pages, 2903 KiB  
Article
STs-NeRF: Novel View Synthesis of Space Targets Based on Improved Neural Radiance Fields
by Kaidi Ma, Peixun Liu, Haijiang Sun and Jiawei Teng
Remote Sens. 2024, 16(13), 2327; https://doi.org/10.3390/rs16132327 - 26 Jun 2024
Viewed by 663
Abstract
Since the Neural Radiance Field (NeRF) was first proposed, a large number of studies dedicated to it have emerged. These methods achieve very good results in their respective contexts, but they are not sufficiently practical for our project. If we want to obtain novel images of satellites photographed in space by another satellite, we must contend with problems such as inaccurate camera focal lengths and poor image texture. There are also small structures on satellites that NeRF-like algorithms cannot render well. In these cases, the NeRF's performance cannot meet the project's needs. In fact, images rendered by the NeRF contain many incomplete structures, while the MipNeRF blurs the edges of structures on the satellite and produces unrealistic colors. In response to these problems, we propose STs-NeRF, which improves the quality of novel-view images through an encoding module and a new network structure. We found a method for calculating poses that is suitable for our dataset and that enhances the network's input learning effect by recoding the sampling points and viewing directions through a dynamic encoding (DE) module. We then input them into our layer-by-layer normalized multi-layer perceptron (LLNMLP). By simultaneously inputting points and directions into the network, we avoid mutual influence between light rays, and through layer-by-layer normalization, we ease the model's overfitting during training. Since the real images cannot be made public, we created a synthetic dataset and conducted a series of experiments. The experiments show that our method achieves the best results in reconstructing captured satellite images, compared with the NeRF, the MipNeRF, the NeuS, and the NeRF2Mesh, and improves the Peak Signal-to-Noise Ratio (PSNR) by 19%. We have also tested on public datasets, and our NeRF still renders acceptable images on datasets with better textures.
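For context, the original NeRF conditions its MLP on a frequency-based positional encoding of each sampled point and viewing direction; the abstract's dynamic encoding (DE) module recodes these inputs, though its exact form is not described there. Below is a minimal sketch of the standard NeRF-style frequency encoding that such a module builds on; the function name and frequency count are illustrative assumptions, not the paper's implementation:

```python
import math

def positional_encoding(p, num_freqs=4):
    """Standard NeRF frequency encoding gamma(p):
    maps each input coordinate x to the pairs
    [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0 .. num_freqs-1,
    lifting low-dimensional inputs to a higher-frequency feature space."""
    out = []
    for x in p:
        for k in range(num_freqs):
            freq = (2.0 ** k) * math.pi
            out.append(math.sin(freq * x))
            out.append(math.cos(freq * x))
    return out

# Encode one 3D sample point: 3 coords * 4 frequencies * 2 (sin, cos) = 24 features.
point = (0.25, -0.5, 1.0)
encoded = positional_encoding(point)
print(len(encoded))  # 24
```

In the original NeRF, this encoding is applied separately to sample positions and viewing directions before they enter the MLP; the abstract indicates that STs-NeRF instead feeds both into its network simultaneously after recoding.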