Advances in 3D Imaging and Multimodal Sensing Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 April 2024) | Viewed by 7671

Special Issue Editors

Guest Editor
Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing (STIIMA), National Research Council of Italy, Via Amendola, 122 D/O, 70126 Bari, Italy
Interests: computer vision; image processing; pattern recognition; machine learning; deep learning

Guest Editor
Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing (STIIMA), National Research Council of Italy, Via Amendola, 122 D/O, 70126 Bari, Italy
Interests: machine learning; deep learning; signal processing; artificial intelligence

Guest Editor
Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing (STIIMA), National Research Council of Italy, Via Amendola, 122 D/O, 70126 Bari, Italy
Interests: computer vision; deep learning; signal processing; image processing and analysis; artificial intelligence and machine learning; 3D laser scanning

Special Issue Information

Dear Colleagues,

Continuous advances in science and technology have wide implications in several fields, such as Industry 4.0, space development, marine biology, environmental protection, healthcare, biological applications, civil engineering, and cultural heritage protection. Data are the basis of these advancements: thanks to data, machine and deep learning algorithms can be trained to solve problems that would be infeasible with traditional methods. However, this raises the problem of gathering such large amounts of data, and this is where sensors play a key role. Sensors are the basis of data acquisition and, as a consequence, of modern technological applications: almost every system, from the exploration of our universe to the development of complex nanomachinery, relies heavily on measurements performed by a variety of sensors, which are then used to improve the experience in almost every IoT field.

Still, real-world processes are often complex and not limited to a single data source: for example, modern biometric applications rely on multiple sources, such as the user's voice and face, while smart monitoring systems in Industry 4.0 may detect process anomalies in an artifact using data from both its reconstruction and its physical properties. Consequently, multimodal sensing applications widen the perspective of possible advancements, allowing complex systems to be properly modeled via data fusion. In this sense, 3D imagery can further enrich the data provided by physical sensors, allowing for an even more complete representation of the underlying processes and phenomena.

Regardless of the specific application scenario, these advancements share a common focus, namely employing advanced data acquisition techniques via multimodal sensors and 3D imagery, and this requires a strong multidisciplinary research effort.

The main purpose of this Special Issue is to collect innovative contributions in 3D Imaging and Multimodal Sensing Applications (e.g., data reconstruction, data fusion, anomaly detection), ranging from new methodologies to innovative approaches in different domains. Particular emphasis should be given to processing data gathered by multimodal sensor systems to solve common issues known in the literature and present innovative best practices.

Topics of interest include, but are not limited to:

  • Industry 4.0;
  • Smart agriculture;
  • Aerospace robotics and automation;
  • Digital cultural heritage;
  • Healthcare.

Dr. Vito Renò
Dr. Angelo Cardellicchio
Dr. Cosimo Patruno
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • intelligent systems
  • machine perception
  • signal, 3D, and image processing
  • multimodal sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

17 pages, 10669 KiB  
Article
Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
by Yujing Wang, Fang Ye, Jiusun Zeng, Jinhui Cai and Wangsen Huang
Sensors 2024, 24(14), 4434; https://doi.org/10.3390/s24144434 - 9 Jul 2024
Viewed by 543
Abstract
Aiming at the difficult segmentation of adherent images caused by the not fully convex shape of peanut pods, their complex surface texture, and their diverse structures, a multimodal fusion algorithm is proposed to achieve 2D segmentation of adherent peanut images with the assistance of 3D point clouds. First, the point cloud of a moving peanut is captured line by line using a line-structured-light imaging system, and its three-dimensional shape is obtained through stitching, combined with a local surface-fitting algorithm to calculate normal vectors and curvature. Seed points are selected based on the principle of minimum curvature, and neighboring points are searched using the KD-Tree algorithm. The point cloud is filtered and segmented according to the normal angle and curvature thresholds until the point cloud segmentation of the individual peanut is complete, and the two-dimensional contour of the individual peanut model is then extracted using the rolling method. A search template is established, multiscale feature matching is applied to the adherent image to localize each region, and finally the segmented regions are refined with a morphological opening operation. The experimental results show that the algorithm improves segmentation accuracy, reaching 96.8%.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)
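The abstract above outlines a curvature-seeded region-growing pipeline: select seeds at minimum curvature, find neighboring points with a KD-Tree, and grow regions while the normal angle and curvature stay below thresholds. The following NumPy/SciPy sketch illustrates that generic scheme only; it is not the authors' code, and the function name, default thresholds, and neighborhood size are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, curvatures, k=10,
                angle_deg=15.0, curv_thresh=0.05):
    """Curvature-seeded region growing over a point cloud.

    Each region starts at the unlabeled point of minimum curvature;
    KD-tree neighbours join when their normal is within `angle_deg`
    of the current point's normal, and low-curvature neighbours keep
    growing the region. Thresholds are illustrative defaults, not
    values from the paper.
    """
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1, dtype=int)
    unlabeled = set(range(len(points)))
    region_id = 0
    while unlabeled:
        # start each region at the remaining point of minimum curvature
        seed = min(unlabeled, key=lambda i: curvatures[i])
        stack = [seed]
        while stack:
            i = stack.pop()
            if i not in unlabeled:
                continue
            unlabeled.discard(i)
            labels[i] = region_id
            _, nbrs = tree.query(points[i], k=min(k, len(points)))
            for j in np.atleast_1d(nbrs):
                if j not in unlabeled:
                    continue
                if abs(np.dot(normals[i], normals[j])) > cos_thresh:
                    if curvatures[j] < curv_thresh:
                        stack.append(j)       # smooth point: keep growing
                    else:
                        unlabeled.discard(j)  # boundary point: label, stop
                        labels[j] = region_id
        region_id += 1
    return labels
```

Smooth neighbors keep growing the region, while high-curvature neighbors are labeled but not expanded, which is what lets such a scheme stop at the crease between two adherent pods.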

24 pages, 4289 KiB  
Article
Identification of a Person in a Trajectory Based on Wearable Sensor Data Analysis
by Jinzhe Yan, Masahiro Toyoura and Xiangyang Wu
Sensors 2024, 24(11), 3680; https://doi.org/10.3390/s24113680 - 6 Jun 2024
Cited by 1 | Viewed by 960
Abstract
Human trajectories can be tracked by the internal processing of a camera acting as an edge device. This work aims to match people's trajectories obtained from cameras to sensor data, such as acceleration and angular velocity, obtained from wearable devices. Since human trajectories and sensor data differ in modality, the matching method is not straightforward. Furthermore, complete trajectory information is unavailable, so it is difficult to determine which fragments belong to whom. To solve this problem, we propose the SyncScore model to find the similarity between a unit-period trajectory and the corresponding sensor data. We also propose a Likelihood Fusion algorithm that systematically updates the similarity data and integrates it over time while taking the other trajectories into account. We confirmed that the proposed method can match human trajectories and sensor data with an accuracy, sensitivity, and F1 score of 0.725. Our models also achieved decent results on the UEA dataset.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)
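The fusion step described in the abstract, updating similarity data and integrating it over time across candidate trajectories, can be illustrated with a generic recursive Bayesian accumulation. The paper's actual Likelihood Fusion algorithm is not reproduced here; this sketch only shows the underlying idea of treating each period's similarity scores as likelihoods and renormalising the posterior over candidates (function and parameter names are ours):

```python
import numpy as np

def fuse_likelihoods(scores, eps=1e-12):
    """Recursive Bayesian fusion of per-period similarity scores.

    scores[t, k] is the similarity between the sensor stream and
    candidate trajectory k during unit period t. Scores are treated
    as unnormalised likelihoods; the posterior over trajectories is
    updated and renormalised after every period.
    """
    scores = np.asarray(scores, dtype=float)
    _, n = scores.shape
    log_post = np.full(n, -np.log(n))              # uniform prior
    for period in scores:
        log_post += np.log(period + eps)           # Bayes update in log space
        log_post -= np.logaddexp.reduce(log_post)  # renormalise over candidates
    return np.exp(log_post)
```

Working in log space keeps the update numerically stable when many unit periods are integrated, since raw products of similarities underflow quickly.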

24 pages, 8579 KiB  
Article
Evaluating a 3D Ultrasound Imaging Resolution of Single Transmitter/Receiver with Coding Mask by Extracting Phase Information
by Mohammad Syaryadhi, Eiko Nakazawa, Norio Tagawa and Ming Yang
Sensors 2024, 24(5), 1496; https://doi.org/10.3390/s24051496 - 25 Feb 2024
Viewed by 1180
Abstract
We are currently investigating ultrasound imaging with a sensor that consists of a randomized encoding mask attached to a single lead zirconate titanate (PZT) oscillator for a puncture-microscope application. The proposed model was evaluated using a finite element method (FEM) simulator. To increase the number of measurements available to a single-element system, which limits its resolution, the transducer was rotated to different angles. An image constructed by simply solving the linear equation of the image model was of poor quality. In previous work, phase information was extracted from the echo signal to improve the image quality. This study proposes a strategy that integrates weighted frequency-subband compounding with a super-resolution technique to enhance the resolution in the range and lateral directions. The imaging performance of the different methods was also evaluated using experimental data. The results indicate that better image resolution and speckle suppression were obtained by applying the proposed method.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)

16 pages, 6404 KiB  
Article
3D Reverse-Time Migration Imaging for Multiple Cross-Hole Research and Multiple Sensor Settings of Cross-Hole Seismic Exploration
by Fei Cheng, Daicheng Peng and Sansheng Yang
Sensors 2024, 24(3), 815; https://doi.org/10.3390/s24030815 - 26 Jan 2024
Cited by 1 | Viewed by 902
Abstract
The two-dimensional (2D) cross-hole seismic computed tomography (CT) imaging acquisition method has the potential to characterize the target zone better than surface seismic surveys, and it is widely applied in oil and gas exploration, engineering geology, etc. Limited to 2D velocity profiling between holes, however, the method cannot acquire three-dimensional (3D) information on lateral geological structures outside the profile. Additionally, the sensor data received in cross-hole seismic exploration constitute responses from geological bodies in 3D space and are potentially affected by objects outside the well profiles, distorting the imaging results and the geological interpretation. This paper proposes a 3D cross-hole acoustic-wave reverse-time migration imaging method to capture 3D cross-hole geological structures using the sensor settings of multi-cross-hole seismic surveys. Based on an analysis of the resulting 3D cross-hole images under varying sensor settings, optimizing the observation system can aid the cost-efficient acquisition of the 3D underground structure distribution. To verify the method's effectiveness for 3D cross-hole structure imaging, numerical simulations were conducted on four typical geological models involving layers, local high-velocity zones, large dip angles, and faults. The results verify the method's superiority in providing more reliable and accurate 3D geological information for cross-hole seismic exploration, presenting a theoretical basis for processing and interpreting cross-hole data.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)

15 pages, 1710 KiB  
Article
Comparing Direct Measurements and Three-Dimensional (3D) Scans for Evaluating Facial Soft Tissue
by Boris Gašparović, Luka Morelato, Kristijan Lenac, Goran Mauša, Alexei Zhurov and Višnja Katić
Sensors 2023, 23(5), 2412; https://doi.org/10.3390/s23052412 - 22 Feb 2023
Cited by 9 | Viewed by 3237
Abstract
The inspection of patients' soft tissues and the effects of various dental procedures on their facial physiognomy are quite challenging. To minimise discomfort and simplify the process of manual measurement, we performed facial scanning and computer measurement of experimentally determined demarcation lines. Images were acquired using a low-cost 3D scanner. Two consecutive scans were obtained from 39 participants to test the scanner's repeatability. An additional ten people were scanned before and after forward movement of the mandible (the predicted treatment outcome). Sensor technology that combines red, green, and blue (RGB) data with depth information (RGBD) was used to merge the frames into a 3D object. For proper comparison, the resulting images were registered together using ICP (Iterative Closest Point)-based techniques. Measurements on the 3D images were performed using the exact distance algorithm. One operator measured the same demarcation lines directly on the participants, and repeatability was tested (intra-class correlations). The results showed that the 3D face scans were reproducible with high accuracy (mean difference between repeated scans <1%); the direct measurements were repeatable to some extent (excellent only for the tragus-pogonion demarcation line); and the computational measurements were accurate, repeatable, and comparable to the direct measurements. Three-dimensional (3D) facial scans can be used as a faster, more accurate technique, and one more comfortable for patients, to detect and quantify changes in facial soft tissue resulting from various dental procedures.
(This article belongs to the Special Issue Advances in 3D Imaging and Multimodal Sensing Applications)
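The registration step mentioned in the abstract ("ICP (Iterative Closest Point)-based techniques") alternates two operations: match each source point to its nearest target point, then solve for the best rigid transform in closed form via the Kabsch/SVD method. A minimal point-to-point sketch follows; it is not the study's implementation, and the function and parameter names are ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, max_iters=30, tol=1e-10):
    """Minimal point-to-point ICP.

    Alternates nearest-neighbour correspondence search (KD-tree) with
    a closed-form rigid-transform estimate (Kabsch/SVD). Returns
    (R, t) such that source @ R.T + t approximates target.
    """
    target = np.asarray(target, dtype=float)
    tree = cKDTree(target)
    src = np.asarray(source, dtype=float).copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iters):
        _, idx = tree.query(src)                # current correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)   # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                     # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.sum((src - matched) ** 2, axis=1))
        if abs(prev_err - err) < tol:           # stop when error plateaus
            break
        prev_err = err
    return R_total, t_total
```

Plain point-to-point ICP like this only converges from a reasonable initial alignment; production pipelines for face scans typically add coarse pre-alignment and outlier rejection on the correspondences.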
