Innovations in Photogrammetry and Remote Sensing: Modern Sensors, New Processing Strategies and Advances in Applications (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 30 October 2024 | Viewed by 18218

Special Issue Editors


Dr. Francesco Pirotti, Guest Editor
Prof. Dr. Francesco Mancini, Guest Editor

Special Issue Information

Dear Colleagues,

This Special Issue seeks papers showing the progress made in key areas of photogrammetry and remote sensing. Papers focused on modern and/or forthcoming sensors, improvements in data processing strategies, and the assessment of their reliability are welcome. Additionally, we aim to collect papers devoted to the application of such innovations, as proof of their contribution to the observation of the natural and built environment and to our understanding of phenomena at the required spatial scale. In particular, the following topics may be addressed in submissions:

  • Forthcoming sensors in photogrammetry and remote sensing;
  • Quality assurance/quality control (QA/QC);
  • Potential offered by multi-sensor data fusion;
  • Methodologies for near real-time mapping and monitoring from aerial/satellite platforms;
  • Big dataset handling;
  • Artificial Intelligence for data processing;
  • 3D modelling;
  • Error budgets;
  • Novel approaches for the processing of multi-temporal data;
  • Design, testing, and applications of new sensors.

Dr. Francesco Pirotti
Prof. Dr. Francesco Mancini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


23 pages, 12392 KiB  
Article
Craniofacial 3D Morphometric Analysis with Smartphone-Based Photogrammetry
by Omar C. Quispe-Enriquez, Juan José Valero-Lanzuela and José Luis Lerma
Sensors 2024, 24(1), 230; https://doi.org/10.3390/s24010230 - 30 Dec 2023
Cited by 1 | Viewed by 918
Abstract
Obtaining 3D craniofacial morphometric data is essential in a variety of medical and educational disciplines. In this study, we explore smartphone-based photogrammetry with photos and video recordings as an effective tool to create accurate and accessible metrics from head 3D models. The research involves the acquisition of craniofacial 3D models of both volunteers and head mannequins using a Samsung Galaxy S22 smartphone. For the photogrammetric processing, Agisoft Metashape v1.7 and PhotoMeDAS software v1.7 were used. The Academia 50 white-light scanner was used as reference data (ground truth). A comparison of the obtained 3D meshes was conducted, yielding the following results: 0.22 ± 1.29 mm for photogrammetry with camera photos, 0.47 ± 1.43 mm for videogrammetry with video frames, and 0.39 ± 1.02 mm for PhotoMeDAS. Similarly, anatomical points were measured and linear measurements extracted, yielding the following results: 0.75 mm for photogrammetry, 1 mm for videogrammetry, and 1.25 mm for PhotoMeDAS, despite the large differences in data acquisition and processing time among the four approaches. This study suggests the possibility of integrating photogrammetry, either with photos or with video frames, and the use of PhotoMeDAS to obtain overall craniofacial 3D models, with significant applications in the medical fields of neurosurgery and maxillofacial surgery.
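The mean ± standard deviation figures above summarise deviations between each smartphone-derived mesh and the scanner reference. The authors' exact signed cloud-to-mesh workflow is not given in the abstract; purely as a rough illustration, an unsigned nearest-neighbour comparison between two point clouds can be sketched in Python (the array contents, sizes, and millimetre units below are assumptions, not the paper's data):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(test_points, reference_points):
    """Nearest-neighbour distances from each test point to a reference cloud.

    test_points, reference_points: (N, 3) and (M, 3) coordinate arrays,
    assumed to be in millimetres. Returns the mean and standard deviation of
    the distances, analogous in spirit to the 'mean ± std' figures above.
    """
    tree = cKDTree(reference_points)        # spatial index over the reference cloud
    distances, _ = tree.query(test_points)  # unsigned distance per test point
    return distances.mean(), distances.std()

# Hypothetical usage with stand-in data (real data would come from the meshes):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 100, size=(10_000, 3))
    test = reference + rng.normal(0, 0.5, size=reference.shape)  # simulated noise
    mean_d, std_d = cloud_to_cloud_stats(test, reference)
    print(f"{mean_d:.2f} ± {std_d:.2f} mm")
```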

24 pages, 8038 KiB  
Article
Multispectral UAV Data and GPR Survey for Archeological Anomaly Detection Supporting 3D Reconstruction
by Diego Ronchi, Marco Limongiello, Emanuel Demetrescu and Daniele Ferdani
Sensors 2023, 23(5), 2769; https://doi.org/10.3390/s23052769 - 02 Mar 2023
Cited by 8 | Viewed by 2580
Abstract
Archeological prospection and 3D reconstruction are increasingly combined in large archeological projects that serve both site investigation and dissemination of results. This paper describes and validates a method for using multispectral imagery captured by unmanned aerial vehicles (UAVs), subsurface geophysical surveys, and stratigraphic excavations to evaluate the role of 3D semantic visualizations for the collected data. The information recorded by the various methods will be experimentally reconciled using the Extended Matrix and other original open-source tools, keeping both the scientific processes that generated it and the derived data separate, transparent, and reproducible. This structured information makes the variety of sources required for interpretation and reconstructive hypotheses immediately accessible. The methodology will be applied to the first available data from a five-year multidisciplinary investigation project at Tres Tabernae, a Roman site near Rome, where numerous non-destructive technologies, as well as excavation campaigns, will be progressively deployed to explore the site and validate the approaches.

18 pages, 21254 KiB  
Article
A Combined Physical and Mathematical Calibration Method for Low-Cost Cameras in the Air and Underwater Environment
by Zhenling Ma, Xu Zhong, Hong Xie, Yongjun Zhou, Yuan Chen and Jiali Wang
Sensors 2023, 23(4), 2041; https://doi.org/10.3390/s23042041 - 11 Feb 2023
Cited by 3 | Viewed by 1251
Abstract
Low-cost camera calibration is vital in air and underwater photogrammetric applications. However, various lens distortions and the influence of the underwater environment are difficult to cover with a universal distortion compensation model, and residual distortions may remain after conventional calibration. In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras, which can adapt to both in-air and underwater environments. The commonly used physical distortion models are integrated to describe the image distortions. The combination is a high-order polynomial, which can be treated as a set of basis functions that successively approximate the image deformation from the point of view of mathematical approximation. The calibration process is repeated until certain criteria are met and the distortions are reduced to a minimum. At the end, several sets of distortion parameters and stable camera interior orientation (IO) parameters form the final calibration results. The Canon and GoPro in-air calibration experiments show that the GoPro exhibits distortions about seven times larger than the Canon. Most of the Canon distortions are described by the Australis model, whereas most of the GoPro's decentering distortions remain. Using the proposed method, all the Canon and GoPro distortions decrease to close to zero after four calibration iterations, and stable camera IO parameters are obtained. The GoPro Hero 5 Black underwater calibration likewise yields four sets of distortion parameters and stable camera IO parameters after four calibrations. The calibration results differ between the underwater environment and air owing to refractive and asymmetric environment effects. In summary, the proposed method improves accuracy compared with the conventional method and offers a flexible way to calibrate low-cost cameras for highly accurate in-air and underwater measurement and 3D modeling applications.
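The "commonly used physical distortion models" referenced here typically include the Brown/Australis radial and decentering terms. The abstract does not give the exact formulation, so the following Python sketch of the standard correction is illustrative only; the coefficient names (k1–k3, p1, p2) and the principal-point handling follow common convention rather than the paper:

```python
import numpy as np

def brown_distortion(x, y, x0, y0, k1, k2, k3, p1, p2):
    """Standard Brown/Australis-style distortion corrections (illustrative sketch).

    (x, y): measured image coordinates; (x0, y0): principal point.
    k1..k3: radial distortion coefficients; p1, p2: decentering coefficients.
    Returns the corrections (dx, dy) to add to the measured coordinates.
    """
    xb, yb = x - x0, y - y0                   # coordinates reduced to the principal point
    r2 = xb**2 + yb**2                        # squared radial distance
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = xb * radial + p1 * (r2 + 2 * xb**2) + 2 * p2 * xb * yb
    dy = yb * radial + p2 * (r2 + 2 * yb**2) + 2 * p1 * xb * yb
    return dx, dy
```

In the spirit of the paper's iterative scheme, corrections like these would be re-estimated and re-applied over several calibration passes until the residual distortions stop shrinking; the actual stopping criteria are defined in the article itself.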

22 pages, 20323 KiB  
Article
Quantifying the Influence of Surface Texture and Shape on Structure from Motion 3D Reconstructions
by Mikkel Schou Nielsen, Ivan Nikolov, Emil Krog Kruse, Jørgen Garnæs and Claus Brøndgaard Madsen
Sensors 2023, 23(1), 178; https://doi.org/10.3390/s23010178 - 24 Dec 2022
Cited by 2 | Viewed by 1637
Abstract
In general, optical methods for geometrical measurements are influenced by the surface properties of the examined object. In Structure from Motion (SfM), local variations in surface color or topography are necessary for detecting feature points for point-cloud triangulation. Thus, the level of contrast or texture is important for an accurate reconstruction. However, quantitative studies of the influence of surface texture on geometrical reconstruction are largely missing. This study addresses that gap by investigating the influence of object texture levels on reconstruction accuracy using a set of reference artifacts. The artifacts are designed with well-defined surface geometries, and quantitative metrics are introduced to evaluate the lateral resolution, vertical geometric variation, and spatial–frequency information of the reconstructions. The influence of texture level is compared to variations in capturing range. For the SfM measurements, the ContextCapture software solution and a 50 Mpx DSLR camera are used. The findings are compared to results using calibrated optical microscopes. The results show that the proposed pipeline can be used for investigating the influence of texture on SfM reconstructions. The introduced metrics allow for a quantitative comparison of the reconstructions at varying texture levels and ranges. Both range and texture level are seen to affect the reconstructed geometries, although in different ways. While an increase in range at a fixed focal length reduces the spatial resolution, an insufficient texture level causes an increased noise level and may introduce errors in the reconstruction. The artifacts are designed to be easily replicable, and by providing a step-by-step procedure for our testing and comparison methodology, we hope that other researchers will make use of the proposed testing pipeline.
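The abstract does not spell out how the vertical-variation and spatial-frequency metrics are computed. Purely as a hypothetical illustration of that kind of analysis (not the paper's metrics), one could extract an evenly sampled height profile from a reconstruction and compare it against the designed geometry:

```python
import numpy as np

def profile_metrics(heights, reference, sample_spacing_mm):
    """Illustrative metrics for a 1D height profile from a reconstruction.

    heights, reference: height samples (assumed mm) along the same evenly
    spaced profile; sample_spacing_mm: distance between samples.
    Returns the RMS deviation from the reference and the power spectrum of
    the residual profile (frequencies in cycles/mm).
    """
    residual = np.asarray(heights) - np.asarray(reference)
    rms = np.sqrt(np.mean(residual**2))           # vertical deviation metric
    spectrum = np.abs(np.fft.rfft(residual))**2   # spatial-frequency content
    freqs = np.fft.rfftfreq(residual.size, d=sample_spacing_mm)
    return rms, freqs, spectrum
```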

19 pages, 15642 KiB  
Article
Accuracy Verification of Surface Models of Architectural Objects from the iPad LiDAR in the Context of Photogrammetry Methods
by Piotr Łabędź, Krzysztof Skabek, Paweł Ozimek, Dominika Rola, Agnieszka Ozimek and Ksenia Ostrowska
Sensors 2022, 22(21), 8504; https://doi.org/10.3390/s22218504 - 04 Nov 2022
Cited by 8 | Viewed by 3132
Abstract
The creation of accurate three-dimensional models has been radically simplified in recent years by the development of photogrammetric methods. However, the photogrammetric procedure requires complex data processing and does not provide an immediate 3D model, so its use during field (in situ) surveys is infeasible. This paper presents the mapping of fragments of built structures at different scales (fine detail, garden sculpture, architectural interior, building facade) using the LiDAR sensor of the Apple iPad Pro mobile device. The resulting iPad LiDAR and photogrammetric models were compared with reference models derived from laser scanning and point measurements. For small objects with complex geometries acquired with the iPad LiDAR, up to 50% of points were unaligned with the reference models, which is much more than for the photogrammetric models. This was primarily due to much less frequent sampling and, consequently, a sparser grid. This simplification of object surfaces is beneficial in the case of walls and building facades, as it smooths out their surfaces. The application potential of the iPad Pro LiDAR is severely constrained by its 5 m range cap, which greatly limits the size of objects that can be recorded and excludes most buildings.
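The "up to 50% of points unaligned" figure expresses how many LiDAR points fall outside a distance tolerance of the reference model. A hypothetical sketch of such a check, reusing the nearest-neighbour idea from the craniofacial comparison above (the tolerance value and arrays are assumptions, not the authors' settings):

```python
import numpy as np
from scipy.spatial import cKDTree

def unaligned_fraction(test_points, reference_points, tolerance_mm=5.0):
    """Fraction of test points farther than `tolerance_mm` from the reference cloud.

    test_points, reference_points: (N, 3) and (M, 3) arrays (assumed mm).
    The 5 mm default is an arbitrary illustrative value.
    """
    distances, _ = cKDTree(reference_points).query(test_points)
    return float(np.mean(distances > tolerance_mm))
```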

24 pages, 13184 KiB  
Article
Combining Photogrammetry and Photometric Stereo to Achieve Precise and Complete 3D Reconstruction
by Ali Karami, Fabio Menna and Fabio Remondino
Sensors 2022, 22(21), 8172; https://doi.org/10.3390/s22218172 - 25 Oct 2022
Cited by 19 | Viewed by 5296
Abstract
Image-based 3D reconstruction has been employed in industrial metrology for micro-measurements and quality control purposes. However, generating a highly detailed and reliable 3D reconstruction of non-collaborative surfaces is still an open issue. In this paper, a method for generating an accurate 3D reconstruction of non-collaborative surfaces through a combination of photogrammetry and photometric stereo is presented. On the one hand, the geometric information derived with photogrammetry is used in areas where its 3D measurements are reliable. On the other hand, the high spatial resolution of photometric stereo is exploited to acquire a finely detailed topography of the surface. Finally, three different approaches are proposed to fuse the geometric information and the high-frequency details. The proposed method is tested on six non-collaborative objects with different surface characteristics. To evaluate its accuracy, a comprehensive cloud-to-cloud comparison between reference data and the 3D points derived from the proposed fusion methods is provided. The experiments demonstrate that, besides correcting global deformation to an average RMSE of less than 0.1 mm, the proposed method recovers the surface topography at the same high resolution as the photometric stereo.
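Photometric stereo recovers per-pixel surface normals from images captured under several known light directions, which is where its high spatial resolution comes from. The classic Lambertian least-squares solution, sketched below in Python, is a generic illustration rather than the specific variant used in the paper; the image stack and light-direction matrix are assumed inputs:

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Classic Lambertian photometric stereo (illustrative sketch).

    images: (K, H, W) array of grayscale intensities under K known lights.
    light_dirs: (K, 3) array of unit light-direction vectors.
    Returns (H, W, 3) unit surface normals and an (H, W) albedo map.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # per-pixel intensity vectors
    # Solve light_dirs @ g = I in the least-squares sense, g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # shape (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.clip(albedo, 1e-8, None)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```

The fine-scale normals obtained this way would then be fused with the photogrammetric geometry, which is the step the paper's three fusion approaches address.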

Review


17 pages, 508 KiB  
Review
A Critical Review of Remote Sensing Approaches and Deep Learning Techniques in Archaeology
by Israa Kadhim and Fanar M. Abed
Sensors 2023, 23(6), 2918; https://doi.org/10.3390/s23062918 - 08 Mar 2023
Cited by 4 | Viewed by 2118
Abstract
To date, comprehensive reviews and discussions of the strengths and limitations of Remote Sensing (RS) standalone and combined approaches, and of Deep Learning (DL)-based RS datasets, in archaeology have been limited. The objective of this paper is, therefore, to review and critically discuss existing studies that have applied these advanced approaches in archaeology, with a specific focus on digital preservation and object detection. Standalone RS approaches, including range-based and image-based modelling (e.g., laser scanning and SfM photogrammetry), have several disadvantages in terms of spatial resolution, penetration, texture, colour, and accuracy. These limitations have led some archaeological studies to fuse/integrate multiple RS datasets to overcome them and produce comparatively detailed outcomes. However, there are still knowledge gaps in examining the effectiveness of these RS approaches in enhancing the detection of archaeological remains/areas. Thus, this review is intended to provide valuable insight for archaeological studies, helping to fill these knowledge gaps and further advance the exploration of archaeological areas/features using RS together with DL approaches.
