
Search Results (34)

Search Parameters:
Keywords = ToF distance sensor

31 pages, 17034 KiB  
Article
IoT-Enabled Real-Time Monitoring of Urban Garbage Levels Using Time-of-Flight Sensing Technology
by Luis Miguel Pires, João Figueiredo, Ricardo Martins and José Martins
Sensors 2025, 25(7), 2152; https://doi.org/10.3390/s25072152 - 28 Mar 2025
Viewed by 509
Abstract
This manuscript presents a real-time monitoring system for urban garbage levels using Time-of-Flight (ToF) sensing technology. The experiment employs the VL53L8CX sensor, which accurately measures distances, along with an ESP32-S3 microcontroller that enables IoT connectivity. The ToF-Node IoT system, consisting of the VL53L8CX sensor connected to the ESP32-S3, communicates over Wi-Fi with an IoT gateway (Raspberry Pi 3), which in turn connects to an IoT cloud, also over Wi-Fi. This setup provides real-time data on waste container capacities, facilitating efficient waste collection management. By integrating sensor data and network communication, the system supports informed decision-making for optimizing collection logistics, contributing to cleaner and more sustainable cities. The ToF-Node was tested in four scenarios, with a PCB measuring 40 × 18 × 4 mm and an enclosure of 65 × 40 × 30 mm. We used an office trash box with a height of 250 mm, with the ToF-Node located on top. Results demonstrate the effectiveness of ToF technology in environmental monitoring and the potential of IoT to enhance urban services. For detailed monitoring, additional ToF sensors may be required. Data collected are displayed in the IoT cloud for better monitoring and can be viewed by level and volume. The ToF-Node and the IoT gateway have a combined consumption of 153.8 mAh.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2024)
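The fill-level arithmetic behind such a system is straightforward: with the sensor mounted at the top of the 250 mm container described above, the measured distance to the garbage surface maps directly to a fill percentage. A minimal sketch of that mapping (the function names, clipping behavior, and the prismatic-container volume model are illustrative assumptions, not taken from the paper):

```python
# Estimate container fill level from a top-mounted ToF distance reading.
# Assumes a 250 mm deep container measured from the sensor, matching the
# office-trash-box test above; the volume model is an illustrative guess.

CONTAINER_DEPTH_MM = 250.0

def fill_percent(distance_mm: float) -> float:
    """Percentage full, given the ToF distance to the garbage surface."""
    distance_mm = min(max(distance_mm, 0.0), CONTAINER_DEPTH_MM)
    return 100.0 * (CONTAINER_DEPTH_MM - distance_mm) / CONTAINER_DEPTH_MM

def fill_volume_l(distance_mm: float, base_area_cm2: float = 600.0) -> float:
    """Occupied volume in litres for a prismatic container of given base area."""
    filled_mm = CONTAINER_DEPTH_MM - min(max(distance_mm, 0.0), CONTAINER_DEPTH_MM)
    return base_area_cm2 * (filled_mm / 10.0) / 1000.0  # cm^2 * cm -> cm^3 -> L

print(fill_percent(250.0))  # sensor sees the bottom: empty
print(fill_percent(50.0))   # surface 50 mm below sensor: 80% full
```

A node like this would report both numbers to the gateway, matching the level and volume views mentioned in the abstract.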

14 pages, 3783 KiB  
Article
Modeling and Estimation of the Pitch Angle for a Levitating Cart in a UAV Magnetic Catapult Under Stationary Conditions
by Edyta Ładyżyńska-Kozdraś, Bartosz Czaja, Sławomir Czubaj, Jan Tracz, Anna Sibilska-Mroziewicz and Leszek Baranowski
Electronics 2025, 14(1), 44; https://doi.org/10.3390/electronics14010044 - 26 Dec 2024
Viewed by 622
Abstract
The paper presents a method for modeling and estimating the orientation of a launch cart in the magnetic suspension system of an innovative UAV catapult. The catapult consists of stationary tracks lined with neodymium magnets, generating a trough-shaped magnetic field. The cart levitates above the tracks, supported by four containers housing high-temperature YBCO superconductors cooled with liquid nitrogen. The Meissner effect, characterized by the expulsion of magnetic fields from superconductors, ensures stable hovering of the cart. The main research challenge was to determine the cart’s orientation relative to the tracks, with a focus on the pitch angle, which is critical for collision-free operation and system efficiency. A dedicated measurement stand equipped with Hall sensors and Time-of-Flight (ToF) distance sensors was developed. Hall sensors mounted on the cart’s supports captured magnetic field data at specific points. To model the tracks, the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology was employed: a structured framework consisting of six stages, from problem understanding and data preparation to model evaluation and deployment. This approach guided the analysis of data-driven models and facilitated accurate pitch angle estimation. Evaluation metrics, including mean squared error (MSE), were used to identify and select the optimal models. The final model achieved an MSE of 0.084°, demonstrating its effectiveness for precise orientation control.

11 pages, 5108 KiB  
Article
A Low-Power Optoelectronic Receiver IC for Short-Range LiDAR Sensors in 180 nm CMOS
by Shinhae Choi, Yeojin Chon and Sung Min Park
Micromachines 2024, 15(9), 1066; https://doi.org/10.3390/mi15091066 - 23 Aug 2024
Cited by 1 | Viewed by 1389
Abstract
This paper presents a novel power-efficient topology for receivers in short-range LiDAR sensors. Conventionally, LiDAR sensors exploit complex time-to-digital converters (TDCs) for time-of-flight (ToF) distance measurements, frequently leading to intricate circuit designs and persistent walk-error issues. However, this work features a fully differential trans-impedance amplifier with on-chip avalanche photodiodes as optical detectors, so that the need for subsequent post-amplifiers and output buffers is eliminated, considerably reducing power consumption. Also, the combination of amplitude-to-voltage (A2V) and time-to-voltage (T2V) converters is exploited to replace the complicated TDC circuit. The A2V converter efficiently processes weak input photocurrents ranging from 1 to 50 μApp, corresponding to a maximum distance of 22.8 m, while the T2V converter handles relatively larger photocurrents from 40 μApp to 5.8 mApp for distances as short as 30 cm. The post-layout simulations confirm that the proposed LiDAR receiver can detect optical pulses over the range of 0.3 to 22.8 m with a low power dissipation of 10 mW from a single 1.8 V supply. This topology significantly simplifies the receiver design and reduces power consumption, providing a more efficient and accurate solution that is highly suitable for short-range LiDAR sensor applications.
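The distance range quoted above maps to echo timing through the basic pulsed-ToF relation d = c·Δt/2, so a 0.3 to 22.8 m receiver must resolve round-trip times between roughly 2 ns and 152 ns. A quick sanity check of that relation (illustrative only, not the authors' code):

```python
# Pulsed time-of-flight: distance = speed_of_light * round_trip_time / 2.
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    """Target distance for a measured echo round-trip time."""
    return C * round_trip_s / 2.0

def round_trip_s(distance: float) -> float:
    """Echo round-trip time for a target at the given distance in metres."""
    return 2.0 * distance / C

# Round-trip times for the receiver's quoted 0.3 m and 22.8 m limits.
print(round_trip_s(0.3) * 1e9)   # ~2 ns
print(round_trip_s(22.8) * 1e9)  # ~152 ns
```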

20 pages, 7423 KiB  
Article
Modelling and Analysis of Vector and Vector Vortex Beams Reflection for Optical Sensing
by Wangke Yu and Jize Yan
Photonics 2024, 11(8), 729; https://doi.org/10.3390/photonics11080729 - 4 Aug 2024
Viewed by 1703
Abstract
Light Detection and Ranging (LiDAR) sensors can precisely determine object distances using the pulsed time of flight (TOF) or amplitude-modulated continuous wave (AMCW) TOF methods and velocity using the frequency-modulated continuous wave (FMCW) approach. In this paper, we focus on modelling and analysing the reflection of vector beams (VBs) and vector vortex beams (VVBs) for optical sensing in LiDAR applications. Unlike traditional TOF and FMCW methods, this novel approach uses VBs and VVBs as detection signals to measure the orientation of reflecting surfaces. A key component of this sensing scheme is understanding the relationship between the characteristics of the reflected optical fields and the orientation of the reflecting surface. To this end, we develop a computational model for the reflection of VBs and VVBs. This model allows us to investigate critical aspects of the reflected field, such as intensity distribution, intensity centroid offset, reflectance, and the variation of the intensity range measured along the azimuthal direction. By thoroughly analysing these characteristics, we aim to enhance the functionality of LiDAR sensors in detecting the orientation of reflecting surfaces.
(This article belongs to the Special Issue Optical Vortex: Fundamentals and Applications)

25 pages, 8864 KiB  
Article
A Real-Time and Privacy-Preserving Facial Expression Recognition System Using an AI-Powered Microcontroller
by Jiajin Zhang, Xiaolong Xie, Guoying Peng, Li Liu, Hongyu Yang, Rong Guo, Juntao Cao and Jianke Yang
Electronics 2024, 13(14), 2791; https://doi.org/10.3390/electronics13142791 - 16 Jul 2024
Cited by 1 | Viewed by 2975
Abstract
This study proposes an edge computing-based facial expression recognition system that is low cost, low power, and privacy preserving. It utilizes a minimally obtrusive cap-based system designed for the continuous and real-time monitoring of a user’s facial expressions. The proposed method focuses on detecting facial skin deformations accompanying changes in facial expressions. A multi-zone time-of-flight (ToF) depth sensor VL53L5CX, featuring an 8 × 8 depth image, is integrated into the front brim of the cap to measure the distance between the sensor and the user’s facial skin surface. The distance values corresponding to seven universal facial expressions (neutral, happy, disgust, anger, surprise, fear, and sad) are transmitted to a low-power STM32F476 microcontroller (MCU) as an edge device for data preprocessing and facial expression classification tasks utilizing an on-device pre-trained deep learning model. Performance evaluation of the system is conducted through experiments utilizing data collected from 20 subjects. Four deep learning algorithms, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, and Deep Neural Networks (DNN), are assessed. These algorithms demonstrate high accuracy, with CNN yielding the best result, achieving an accuracy of 89.20% at a frame rate of 15 frames per second (fps) and a maximum latency of 2 ms.
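The sensor side of such a pipeline is compact: each VL53L5CX frame is an 8 × 8 grid of zone distances, which is typically flattened and scaled before being fed to an on-device classifier. A hedged numpy sketch of one plausible preprocessing step (the baseline-subtraction scheme and the ±50 mm clipping range are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def preprocess(frame_mm: np.ndarray, baseline_mm: np.ndarray) -> np.ndarray:
    """Deviation of the facial skin surface from a neutral-expression baseline,
    scaled to [-1, 1] as a 64-element classifier input. Ranges are illustrative."""
    assert frame_mm.shape == (8, 8)
    delta = frame_mm.astype(np.float64) - baseline_mm  # skin displacement in mm
    delta = np.clip(delta, -50.0, 50.0) / 50.0         # assumed +/-50 mm range
    return delta.reshape(-1)                           # flatten 8x8 -> 64

baseline = np.full((8, 8), 120.0)  # neutral face ~120 mm from the brim sensor
frame = baseline.copy()
frame[6, 3] -= 10.0                # one zone moves 10 mm toward the sensor
x = preprocess(frame, baseline)
print(x.shape)  # (64,)
```

A time window of such vectors would then form the input sequence for the CNN, RNN, LSTM, or DNN models compared in the paper.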

14 pages, 6878 KiB  
Article
Enhancing ToF Sensor Precision Using 3D Models and Simulation for Vision Inspection in Industrial Mobile Robots
by Changmo Yang, Jiheon Kang and Doo-Seop Eom
Appl. Sci. 2024, 14(11), 4595; https://doi.org/10.3390/app14114595 - 27 May 2024
Cited by 2 | Viewed by 2136
Abstract
In recent industrial settings, time-of-flight (ToF) cameras have become essential tools in various applications. These cameras provide high-performance 3D measurements without relying on ambient lighting; however, their performance can degrade due to environmental factors such as temperature, humidity, and distance to the target. This study proposes a novel method to enhance the pixel-level sensing accuracy of ToF cameras by obtaining precise depth data labels in real-world environments. By synchronizing 3D simulations with the actual ToF sensor viewpoints, accurate depth values were acquired and utilized to train AI algorithms, thereby improving ToF depth accuracy. This method was validated in industrial environments such as automobile manufacturing, where the introduction of 3D vision systems improved inspection accuracy compared to traditional 2D systems. Additionally, it was confirmed that ToF depth data can be used to correct positional errors in mobile robot manipulators. Experimental results showed that AI-based preprocessing effectively reduced noise and increased the precision of depth data compared to conventional methods. Consequently, ToF camera performance was enhanced, expanding their potential applications in industrial robotics and automated quality inspection. Future research will focus on developing real-time synchronization technology between ToF sensor data and simulation environments, as well as expanding the AI training dataset to achieve even higher accuracy.
(This article belongs to the Special Issue Application of Artificial Intelligence in Engineering)

18 pages, 2824 KiB  
Article
Time of Flight Distance Sensor–Based Construction Equipment Activity Detection Method
by Young-Jun Park and Chang-Yong Yi
Appl. Sci. 2024, 14(7), 2859; https://doi.org/10.3390/app14072859 - 28 Mar 2024
Cited by 2 | Viewed by 1619
Abstract
In this study, we delve into a novel approach by employing a sensor-based pattern recognition model to address the automation of construction equipment activity analysis. The model integrates time of flight (ToF) sensors with deep convolutional neural networks (DCNNs) to accurately classify the operational activities of construction equipment, focusing on piston movements. The research utilized a one-twelfth-scale excavator model, processing the displacement ratios of its pistons into a unified dataset for analysis. Methodologically, the study outlines the setup of the sensor modules and their integration with a controller, emphasizing the precision in capturing equipment dynamics. The DCNN model, characterized by its four-layered convolutional blocks, was meticulously tuned within the MATLAB environment, demonstrating the model’s learning capabilities through hyperparameter optimization. An analysis of 2070 samples representing six distinct excavator activities yielded an average precision of 95.51% and a recall of 95.31%, with an overall model accuracy of 95.19%. When compared against other vision-based and accelerometer-based methods, the proposed model showcases enhanced performance and reliability under controlled experimental conditions. This substantiates its potential for practical application in real-world construction scenarios, marking a significant advancement in the field of construction equipment monitoring.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

13 pages, 6394 KiB  
Article
Depth Quality Improvement with a 607 MHz Time-Compressive Computational Pseudo-dToF CMOS Image Sensor
by Anh Ngoc Pham, Thoriq Ibrahim, Keita Yasutomi, Shoji Kawahito, Hajime Nagahara and Keiichiro Kagawa
Sensors 2023, 23(23), 9332; https://doi.org/10.3390/s23239332 - 22 Nov 2023
Cited by 1 | Viewed by 2122
Abstract
In this paper, we present a prototype pseudo-direct time-of-flight (ToF) CMOS image sensor, achieving high distance accuracy, precision, and robustness to multipath interference. An indirect ToF (iToF)-based image sensor, which enables high spatial resolution, is used to acquire temporally compressed signals in the charge domain. Whole received light waveforms, like those acquired with conventional direct ToF (dToF) image sensors, can be obtained after image reconstruction based on compressive sensing. Therefore, this method has the advantages of both dToF and iToF depth image sensors, such as high resolution, high accuracy, immunity to multipath interference, and the absence of motion artifacts. Additionally, two approaches to refine the depth resolution are explained: (1) the introduction of a sub-time window; and (2) oversampling in image reconstruction and quadratic fitting in the depth calculation. Experimental results show the separation of two reflections 40 cm apart under multipath interference conditions and a significant improvement in distance precision down to around 1 cm. Point cloud map videos demonstrate the improvements in depth resolution and accuracy. These results suggest that the proposed method could be a promising approach for virtually implementing dToF imaging suitable for challenging environments with multipath interference.
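The quadratic-fitting refinement mentioned above is a standard sub-bin localization trick: fit a parabola through the histogram's peak bin and its two neighbours, and take the parabola's vertex as the echo position. A sketch under that reading (illustrative, not the authors' implementation; bin width is a free parameter):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def subbin_peak(hist: np.ndarray) -> float:
    """Peak position in bins, refined by 3-point parabolic interpolation."""
    i = int(np.argmax(hist))
    if i == 0 or i == len(hist) - 1:
        return float(i)  # no neighbours on one side; fall back to the bin index
    y0, y1, y2 = hist[i - 1], hist[i], hist[i + 1]
    # Vertex of the parabola through the three samples, offset in [-0.5, 0.5].
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

def depth_m(hist: np.ndarray, bin_width_s: float) -> float:
    """Depth from the refined echo time via d = c * t / 2."""
    return C * (subbin_peak(hist) * bin_width_s) / 2.0

# A symmetric echo straddling bins 4 and 5 resolves to exactly 4.5 bins.
hist = np.array([0.0, 0.0, 0.1, 0.5, 2.0, 2.0, 0.5, 0.1, 0.0])
print(subbin_peak(hist))  # 4.5
```

With, say, 1 ns bins, a 0.1-bin residual error corresponds to about 1.5 cm, which is consistent with the centimetre-level precision reported above.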

29 pages, 19371 KiB  
Article
Sensor Fusion for the Robust Detection of Facial Regions of Neonates Using Neural Networks
by Johanna Gleichauf, Lukas Hennemann, Fabian B. Fahlbusch, Oliver Hofmann, Christine Niebler and Alexander Koelpin
Sensors 2023, 23(10), 4910; https://doi.org/10.3390/s23104910 - 19 May 2023
Cited by 1 | Viewed by 2045
Abstract
The monitoring of vital signs and increasing patient comfort are cornerstones of modern neonatal intensive care. Commonly used monitoring methods are based on skin contact, which can cause irritation and discomfort in preterm neonates. Therefore, non-contact approaches are the subject of current research aiming to resolve this dichotomy. Robust neonatal face detection is essential for the reliable detection of heart rate, respiratory rate and body temperature. While solutions for adult face detection are established, the unique neonatal proportions require a tailored approach. Additionally, sufficient open-source data of neonates in the NICU is lacking. We set out to train neural networks with the thermal-RGB-fusion data of neonates. We propose a novel indirect fusion approach including the sensor fusion of a thermal and RGB camera based on a 3D time-of-flight (ToF) camera. Unlike other approaches, this method is tailored for the close distances encountered in neonatal incubators. Two neural networks were used with the fusion data and compared to RGB and thermal networks. For the class “head” we reached average precision values of 0.9958 (RetinaNet) and 0.9455 (YOLOv3) for the fusion data. Compared with the literature, similar precision was achieved, but we are the first to train a neural network with fusion data of neonates. The advantage of this approach lies in calculating the detection area directly from the fusion image for the RGB and thermal modality. This increases data efficiency by 66%. Our results will facilitate the future development of non-contact monitoring to further improve the standard of care for preterm neonates.
(This article belongs to the Special Issue Sensor Data Fusion Analysis for Broad Applications)

12 pages, 1229 KiB  
Article
Plastic Classification Using Optical Parameter Features Measured with the TMF8801 Direct Time-of-Flight Depth Sensor
by Cienna N. Becker and Lucas J. Koerner
Sensors 2023, 23(6), 3324; https://doi.org/10.3390/s23063324 - 22 Mar 2023
Cited by 6 | Viewed by 2844 | Correction
Abstract
We demonstrate a methodology for non-contact classification of five different plastic types using an inexpensive direct time-of-flight (ToF) sensor, the AMS TMF8801, designed for consumer electronics. The direct ToF sensor measures the time for a brief pulse of light to return from the material, with the intensity change and the spatial and temporal spread of the returned light conveying information on the optical properties of the material. We use measured ToF histogram data of all five plastics, captured at a range of sensor-to-material distances, to train a classifier that achieves 96% accuracy on a test dataset. To extend the generality and provide insight into the classification process, we fit the ToF histogram data to a physics-based model that differentiates between surface scattering and subsurface scattering. Three optical parameters, namely the ratio of direct to subsurface intensity, the object distance, and the time constant of the subsurface exponential decay, are used as features for a classifier that achieves 88% accuracy. Additional measurements at a fixed distance of 22.5 cm showed perfect classification and revealed that Poisson noise is not the most significant source of variation when measurements are taken over a range of object distances. In total, this work proposes optical parameters for material classification that are robust over object distance and measurable by miniature direct time-of-flight sensors designed for installation in smartphones.
(This article belongs to the Section Optical Sensors)
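The subsurface time constant used as a feature above can be recovered from the histogram tail with a simple log-linear fit, since an exponential decay A·exp(−t/τ) is a straight line in log space. An illustrative sketch (the bin width, tail selection, and noiseless synthetic data are assumptions, not the paper's fitting procedure):

```python
import numpy as np

def tail_time_constant(counts: np.ndarray, bin_width_s: float) -> float:
    """Fit counts ~ A * exp(-t / tau) on a decaying histogram tail; return tau."""
    t = np.arange(len(counts)) * bin_width_s
    mask = counts > 0                      # log() needs strictly positive counts
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope                    # slope of log-counts is -1/tau

# Synthetic subsurface tail: tau = 2 ns, 100 ps histogram bins.
bin_w = 100e-12
t = np.arange(40) * bin_w
counts = 1000.0 * np.exp(-t / 2e-9)
print(tail_time_constant(counts, bin_w) * 1e9)  # ~2.0 (ns)
```

On real TMF8801 histograms the direct-return peak would first be separated from the tail before fitting, and Poisson-weighted fitting would be more appropriate than a plain least-squares fit on log counts.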

14 pages, 5734 KiB  
Article
Real-Time Finger-Writing Character Recognition via ToF Sensors on Edge Deep Learning
by Jiajin Zhang, Guoying Peng, Hongyu Yang, Chao Tan, Yaqing Tan and Hui Bai
Electronics 2023, 12(3), 685; https://doi.org/10.3390/electronics12030685 - 30 Jan 2023
Cited by 6 | Viewed by 3526
Abstract
Human–computer interaction calls for natural and convenient approaches, among which finger-writing recognition has attracted increasing attention. In this paper, a device-free finger-writing character recognition system based on an array of time-of-flight (ToF) distance sensors is presented. The ToF sensors acquire distance values between the sensors and a writing finger within a 9.5 × 15 cm area on a surface at specific time intervals and send the distance data to a low-power STM32F401 microcontroller, equipped with deep learning algorithms for real-time inference and recognition tasks. The proposed method distinguishes 26 English lower-case letters written by users with a finger and does not require wearing additional devices. All data used in this work were collected from 21 subjects (12 males and 9 females) to evaluate the proposed system in a real scenario. The performance of different deep learning algorithms, such as long short-term memory (LSTM), convolutional neural networks (CNNs) and bidirectional LSTM (BiLSTM), was evaluated. These algorithms achieve high accuracy, with the best result obtained by the LSTM: 98.31% accuracy and a maximum latency of 50 ms.
(This article belongs to the Topic Artificial Intelligence in Sensors)

21 pages, 4880 KiB  
Article
Effective Motion Sensors and Deep Learning Techniques for Unmanned Ground Vehicle (UGV)-Based Automated Pavement Layer Change Detection in Road Construction
by Tirth Patel, Brian H. W. Guo, Jacobus Daniel van der Walt and Yang Zou
Buildings 2023, 13(1), 5; https://doi.org/10.3390/buildings13010005 - 20 Dec 2022
Cited by 11 | Viewed by 3682
Abstract
As-built progress of the constructed pavement should be monitored effectively to provide prompt project control. However, current pavement construction progress monitoring practices (e.g., data collection, processing, and analysis) are typically manual, time-consuming, tedious, and error-prone. To address this, this study proposes a UGV-mounted sensor methodology to develop a pavement layer change classifier that measures pavement construction progress automatically. Initially, data were collected using the UGV equipped with a laser ToF (time-of-flight) distance sensor, accelerometer, gyroscope, and GPS sensor in a controlled environment by constructing various scenarios of pavement layer change. Subsequently, four Long Short-Term Memory network variants (LSTM, BiLSTM, CNN-LSTM, and ConvLSTM) were implemented on the collected sensor data combinations to develop pavement layer change classifiers. The authors conducted experiments to select the best sensor combinations for feature detection of the layer change classifier model. Subsequently, individual performance measures of each class, with learning curves and confusion matrices, were generated using sensor combination data to identify the best of the implemented algorithms. The experimental results demonstrate that the (az + gx + D) sensor combination is the best feature detector, with high performance measures (accuracy, precision, recall, and F1 score). The results also confirm ConvLSTM as the best algorithm, with the highest overall accuracy of 97.88% on the (az + gx + D) sensor combination data. The high performance measures obtained with the proposed approach confirm the feasibility of detecting pavement layer changes in real pavement construction projects. This proposed approach can potentially improve the efficiency of road construction progress measurement and is a stepping stone for automated road construction progress monitoring.
(This article belongs to the Special Issue Construction Automation: Current and Future)

18 pages, 6608 KiB  
Article
Fatigue Effect on Minimal Toe Clearance and Toe Activity during Walking
by Yingjie Jin, Yui Sano, Miho Shogenji and Tetsuyou Watanabe
Sensors 2022, 22(23), 9300; https://doi.org/10.3390/s22239300 - 29 Nov 2022
Cited by 3 | Viewed by 2096
Abstract
This study investigates the effects of fatigue on walking in young adults using a developed clog-integrated sensor system. The sensor can simultaneously measure forefoot activity (FA) and minimum toe clearance (MTC). The FA was evaluated through the change in the contact area captured by a camera, using a method based on a light-conductive plate. The MTC was derived from the distance between the bottom surface of the clog and the ground, obtained using a time-of-flight (TOF) sensor, together with the clog posture, obtained using an acceleration sensor. Fatigue was induced by walking on a treadmill at the fastest walking speed. We evaluated the FA and MTC before and after fatigue in both feet for 14 participants. The effects of fatigue manifested in either the FA or MTC of either foot when the participants were considered individually, although individual variances in the effects of fatigue were observed. In the dominant foot, a significant increase in either the FA or MTC was observed in 13 of the 14 participants. The mean MTC in the dominant foot increased significantly (p = 0.038) when the participants were considered as a group.
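Combining the downward ToF distance with the clog posture reduces, in the simplest reading, to a tilt correction: the vertical clearance is the distance measured along the sensor axis projected onto gravity, with the pitch estimated from the static accelerometer components. A hedged sketch of that geometry (the paper's actual model is more detailed; the quasi-static pitch estimate and the function names are assumptions):

```python
import math

def pitch_deg(ax: float, az: float) -> float:
    """Clog pitch angle from quasi-static accelerometer components (in g)."""
    return math.degrees(math.atan2(ax, az))

def toe_clearance_mm(tof_distance_mm: float, pitch: float) -> float:
    """Vertical clog-to-ground gap for a sensor aimed along the clog's normal."""
    return tof_distance_mm * math.cos(math.radians(pitch))

# A 20 mm ToF reading with the clog pitched 10 degrees.
p = pitch_deg(math.sin(math.radians(10.0)), math.cos(math.radians(10.0)))
print(round(toe_clearance_mm(20.0, p), 2))  # 19.7
```

The MTC would then be the minimum of this corrected clearance over each swing phase.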

17 pages, 22561 KiB  
Article
Autonomous Visual Navigation for a Flower Pollination Drone
by Dries Hulens, Wiebe Van Ranst, Ying Cao and Toon Goedemé
Machines 2022, 10(5), 364; https://doi.org/10.3390/machines10050364 - 10 May 2022
Cited by 19 | Viewed by 4865
Abstract
In this paper, we present the development of a visual navigation capability for a small drone, enabling it to autonomously approach flowers. This is a very important step towards the development of a fully autonomous flower-pollinating nanodrone. The drone we developed is fully autonomous and relies for its navigation on a small on-board color camera, complemented with a single simple ToF distance sensor, to detect and approach the flower. The proposed solution uses a DJI Tello drone carrying a Maix Bit processing board capable of running all deep-learning-based image processing and navigation algorithms on-board. We developed a two-stage visual servoing algorithm that first uses a highly optimized object detection CNN to localize a flower and fly towards it. The second phase, approaching the flower, is implemented by a direct visual steering CNN. This enables the drone to detect any flower in the neighborhood, steer towards it, and make its pollinating rod touch the flower. We trained all deep learning models on an artificial dataset mixing images of real flowers, artificial (synthetic) flowers, and virtually rendered flowers. Our experiments demonstrate that the approach is technically feasible: the drone is able to detect, approach and touch the flowers fully autonomously. Our 10 cm sized prototype was trained on sunflowers, but the methodology presented in this paper can be retrained for any flower type.

12 pages, 1586 KiB  
Article
Can Liquid Lenses Increase Depth of Field in Head Mounted Video See-Through Devices?
by Marina Carbone, Davide Domeneghetti, Fabrizio Cutolo, Renzo D’Amato, Emanuele Cigna, Paolo Domenico Parchi, Marco Gesi, Luca Morelli, Mauro Ferrari and Vincenzo Ferrari
J. Imaging 2021, 7(8), 138; https://doi.org/10.3390/jimaging7080138 - 5 Aug 2021
Cited by 4 | Viewed by 2884
Abstract
Wearable Video See-Through (VST) devices for Augmented Reality (AR) and for obtaining a Magnified View are taking hold in the medical and surgical fields. However, these devices are not yet usable in daily clinical practice, due to focusing problems and a limited depth of field. This study investigates the use of liquid-lens optics to create an autofocus system for wearable VST visors. The autofocus system is based on a Time of Flight (TOF) distance sensor and an active autofocus control system. The integrated autofocus system in the wearable VST viewers showed good potential in terms of providing rapid focus at various distances and a magnified view.
