J. Imaging, Volume 3, Issue 2 (June 2017) – 10 articles

Cover Story: Measuring the 3D shape of plants for phenotyping purposes using active 3D laser scanning devices has become an important field of research. The purpose of this paper is to examine whether leaf thickness is measurable using a high-precision industrial laser scanning system. The results indicate that although the measuring system is in principle able to measure thicknesses of about 74 µm with statistical certainty, leaf thickness is not measurable accurately, due to the penetration of the laser beam and systematic errors caused by the scanning incidence angles.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Measuring Leaf Thickness with 3D Close-Up Laser Scanners: Possible or Not?
by Jan Dupuis, Christoph Holst and Heiner Kuhlmann
J. Imaging 2017, 3(2), 22; https://doi.org/10.3390/jimaging3020022 - 15 Jun 2017
Cited by 5 | Viewed by 8013
Abstract
Measuring the 3D shape of plants for phenotyping purposes using active 3D laser scanning devices has become an important field of research. While the acquisition of stem and root structure is mostly straightforward, extensive and non-invasive measurement of the volumetric shape of leaves, i.e., the leaf thickness, is more challenging. The purpose of this paper is therefore to examine whether leaf thickness is measurable using a high-precision industrial laser scanning system. The study comprises a metrological investigation of the accuracy of the laser scanning system with regard to thickness measurements, as well as experiments on leaf thickness measurement using several leaves of three different types of crop. The results indicate that although the measuring system is in principle able to measure thicknesses of about 74 µm with statistical certainty, the leaf thickness is not measurable accurately. The reason can be attributed to the measurable penetration depth of the laser scanner combined with the variation of the angle of incidence. These effects cause systematic uncertainties and significant variations in the derived leaf thickness.
(This article belongs to the Special Issue 2D, 3D and 4D Imaging for Plant Phenotyping)
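As an illustration only, and not the authors' measurement procedure, the following minimal sketch shows the naive geometric idea behind thickness measurement from two-sided scans: fit a plane to the top-surface point cloud and average the distances of the bottom-surface points from it. The plane-based model and the function names are assumptions of this sketch; real leaves violate them exactly in the ways the paper documents (laser penetration and incidence-angle effects).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def sheet_thickness(top, bottom):
    """Thickness of a thin, locally flat sheet scanned from both sides.

    top, bottom: (N, 3) arrays of 3D points on the two surfaces.
    Assumes both patches are small enough to be approximated by planes.
    """
    c_top, n = fit_plane(top)
    # Signed distances of the bottom-surface points from the top plane.
    d = (bottom - c_top) @ n
    return abs(d.mean())
```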
Article
Object Recognition in Aerial Images Using Convolutional Neural Networks
by Matija Radovic, Offei Adarkwa and Qiaosong Wang
J. Imaging 2017, 3(2), 21; https://doi.org/10.3390/jimaging3020021 - 14 Jun 2017
Cited by 140 | Viewed by 14589
Abstract
There are numerous applications of unmanned aerial vehicles (UAVs) in the management of civil infrastructure assets. A few examples include routine bridge inspections, disaster management, power line surveillance and traffic surveying. As UAV applications become widespread, increased levels of autonomy and independent decision-making are necessary to improve the safety, efficiency, and accuracy of the devices. This paper details the procedure and parameters used for training convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition, and highlights potential application areas in the transportation field. The accuracy and reliability of CNNs depend on the network's training and the selection of operational parameters. The object recognition results show that, by selecting a proper set of parameters, a CNN can detect and classify objects with a high level of accuracy (97.5%) and computational efficiency. Furthermore, using a convolutional neural network implemented in the "YOLO" ("You Only Look Once") platform, objects can be tracked, detected ("seen"), and classified ("comprehended") from video feeds supplied by UAVs in real time.
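The detector runs on the YOLO platform. As a hedged sketch of how a trained Darknet-format YOLO model is commonly queried from Python, the code below uses OpenCV's dnn module; the yolov3.cfg and yolov3.weights file names and the 0.5 confidence threshold are placeholder assumptions, not details from the paper.

```python
import cv2
import numpy as np

# Placeholder file names; any Darknet-format YOLO config/weights pair would do.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect(frame, conf_thresh=0.5):
    """Run one YOLO forward pass; return (class_id, confidence, box) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for out in net.forward(net.getUnconnectedOutLayersNames()):
        for row in out:  # row layout: [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            cls = int(np.argmax(scores))
            if scores[cls] > conf_thresh:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((cls, float(scores[cls]), box))
    return detections
```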
Article
Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform
by Sanjay Singh, Atanendu Sekhar Mandal, Chandra Shekhar and Anil Vohra
J. Imaging 2017, 3(2), 20; https://doi.org/10.3390/jimaging3020020 - 07 Jun 2017
Cited by 3 | Viewed by 5897
Abstract
Motion detection is the heart of a potentially complex automated video surveillance system intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in its use of computational resources on the selected FPGA development platform, because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory-efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory-efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory-efficient system robustly and automatically detects motion in real-world scenarios (both static and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.
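The paper's contribution is a memory-efficient VLSI architecture, but the clustering-based motion detection idea it optimizes can be sketched in software. The toy model below is an assumption for illustration, not the authors' scheme: it keeps K grayscale cluster centroids per pixel, flags pixels matching no centroid as foreground, and adapts the best-matching centroid with a running average.

```python
import numpy as np

K, MATCH_T, ALPHA = 3, 20.0, 0.05  # clusters per pixel, match threshold, learning rate

def init_model(first_frame):
    """Start every pixel's K cluster centroids at its initial grayscale value."""
    return np.repeat(first_frame[..., None].astype(np.float32), K, axis=2)

def detect_motion(frame, centroids):
    """Return a foreground mask; update the best-matching centroid per pixel."""
    f = frame.astype(np.float32)
    diff = np.abs(centroids - f[..., None])
    best = diff.argmin(axis=2)[..., None]
    matched = np.take_along_axis(diff, best, axis=2)[..., 0] < MATCH_T
    # Running-average update of the matched centroid (background adaptation).
    cur = np.take_along_axis(centroids, best, axis=2)[..., 0]
    cur = np.where(matched, (1 - ALPHA) * cur + ALPHA * f, cur)
    np.put_along_axis(centroids, best, cur[..., None], axis=2)
    return ~matched  # True where the pixel belongs to a moving object
```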
Article
A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens
by Cristina Portalés, Sergio Casas, Inmaculada Coma and Marcos Fernández
J. Imaging 2017, 3(2), 19; https://doi.org/10.3390/jimaging3020019 - 03 Jun 2017
Cited by 3 | Viewed by 6549
Abstract
The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed over the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them based on planar homographies and some requiring an extended calibration process. The aim of our research is to design a fast and user-friendly method for multi-projector calibration on analytically defined screens, demonstrated here on a virtual reality Formula 1 simulator with a cylindrical screen. The proposed method results from a combination of surveying, photogrammetry and image processing approaches, and has been designed with the spatial restrictions of virtual reality simulators in mind. The method has been validated from a mathematical point of view, and the complete system, which is currently installed in a shopping mall in Spain, has been tested by different users.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
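Because the screen is analytically defined, a calibrated projector ray can be mapped to a screen point in closed form instead of via a dense lookup table. The sketch below is an illustrative assumption rather than the paper's calibration pipeline: it intersects a ray with an ideal vertical cylinder such as the simulator's screen.

```python
import numpy as np

def ray_cylinder_hit(origin, direction, radius):
    """First forward intersection of the ray o + t*d with the infinite cylinder
    x^2 + z^2 = r^2 (axis along y). Returns the 3D point, or None on a miss."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    # Quadratic in t using only the x and z components.
    a = d[0]**2 + d[2]**2
    if a < 1e-12:                       # ray parallel to the cylinder axis
        return None
    b = 2 * (o[0] * d[0] + o[2] * d[2])
    c = o[0]**2 + o[2]**2 - radius**2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    # Far root: the projector is assumed to sit inside the cylinder.
    t = (-b + np.sqrt(disc)) / (2 * a)
    return o + t * d if t > 0 else None
```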
Article
Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems
by Sanjay Singh, Chandra Shekhar and Anil Vohra
J. Imaging 2017, 3(2), 18; https://doi.org/10.3390/jimaging3020018 - 31 May 2017
Cited by 10 | Viewed by 9157
Abstract
The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype for real-time tracking of an object of interest in live video streams for such systems. In addition to tracking the object of interest in real time, the implemented system is capable of purposive automatic camera movement (pan-tilt) in the direction determined by the movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, the designed object tracking VLSI architecture, the camera movement controller and the display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. The system robustly tracks the target object in real time for standard PAL (720 × 576) resolution color video and automatically steers the camera in the direction of the tracked object's movement.
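A minimal sketch of the pan-tilt logic such a tracker needs, assuming a simple dead-zone controller (the paper does not specify its control law here): the camera is commanded to move whenever the tracked centroid drifts out of a central window.

```python
def pan_tilt_command(cx, cy, frame_w=720, frame_h=576, dead_zone=24):
    """Map the tracked object's centroid to a discrete camera move.

    Returns (pan, tilt) in {-1, 0, +1}: -1 = left/down, +1 = right/up.
    The dead zone keeps the camera still while the object stays near the centre.
    """
    dx = cx - frame_w // 2
    dy = cy - frame_h // 2
    pan = 0 if abs(dx) < dead_zone else (1 if dx > 0 else -1)
    tilt = 0 if abs(dy) < dead_zone else (-1 if dy > 0 else 1)  # image y grows downwards
    return pan, tilt
```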
Article
Depth Estimation for Lytro Images by Adaptive Window Matching on EPI
by Pei-Hsuan Lin, Jeng-Sheng Yeh, Fu-Che Wu and Yung-Yu Chuang
J. Imaging 2017, 3(2), 17; https://doi.org/10.3390/jimaging3020017 - 21 May 2017
Cited by 9 | Viewed by 6291
Abstract
A depth estimation algorithm for plenoptic images is presented. The depth is estimated in two stages: an initial estimation based on the epipolar plane images (EPIs), followed by a refinement of that estimate. In the initial estimation, adaptive window matching is used to improve robustness. The size of the matching window is based on a texture description of the sample patch: based on the texture entropy, a smaller window is used for fine texture, while smooth texture requires a larger window. With the adaptive window size, reference patches are constructed for a range of candidate depths, and the depth estimation compares the similarity among those patches to find the best match. To improve the initial estimate, a refinement algorithm based on Markov Random Field (MRF) optimization is used: an energy function keeps the data close to the original estimate while smoothing it by minimizing the second derivative, and depth values are constrained to be consistent across multiple views.
(This article belongs to the Special Issue 3D Imaging)
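A minimal sketch of the entropy-driven window selection described above, with assumed window sizes and entropy threshold (the paper's actual values are not given here):

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a grayscale patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def adaptive_window(patch, fine=5, coarse=15, thresh=3.0):
    """Small matching window for fine (high-entropy) texture, large for smooth regions."""
    return fine if patch_entropy(patch) > thresh else coarse
```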
Article
Tangent-Based Binary Image Abstraction
by Yi-Ta Wu, Jeng-Sheng Yeh, Fu-Che Wu and Yung-Yu Chuang
J. Imaging 2017, 3(2), 16; https://doi.org/10.3390/jimaging3020016 - 26 Apr 2017
Cited by 6 | Viewed by 7012
Abstract
We present a tangent flow-based image abstraction framework that turns a color or grayscale image into a two-tone image containing only black and white, yet retaining enough information that the viewer can still recognize the main content of the original. Since the relevant visual information is mostly carried by edges, the algorithm first enhances edge content using the tangent flow to preserve detail. We then apply filters that smooth the input image, reducing contrast in low-contrast regions while enhancing important features, and finally binarize the image into the two-tone output. Our method reduces the size of the image significantly while retaining enough information to capture its main point. A final smoothing step applied to the two-tone image yields an artistic black-and-white result.
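For intuition only, here is a grossly simplified two-tone pipeline in OpenCV: edge-preserving smoothing followed by a global Otsu threshold. The paper's tangent-flow-guided filtering is far more structure-aware than this, and the parameters below are arbitrary assumptions.

```python
import cv2

def two_tone(path):
    """Naive two-tone abstraction: smooth while preserving edges, then binarize."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Bilateral filter: diameter 9, color sigma 40, spatial sigma 7 (assumed values).
    smooth = cv2.bilateralFilter(gray, 9, 40, 7)
    # Otsu picks the global black/white threshold automatically.
    _, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```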
Article
3D Reconstructions Using Unstabilized Video Footage from an Unmanned Aerial Vehicle
by Jonathan Byrne, Evan O'Keeffe, Donal Lennon and Debra F. Laefer
J. Imaging 2017, 3(2), 15; https://doi.org/10.3390/jimaging3020015 - 22 Apr 2017
Cited by 24 | Viewed by 7033
Abstract
Structure from motion (SFM) is a methodology for automatically reconstructing three-dimensional (3D) models from a series of two-dimensional (2D) images when there is no a priori knowledge of the camera location and direction. Modern unmanned aerial vehicles (UAVs) now provide a low-cost means of obtaining aerial video footage of a point of interest. Unfortunately, raw video lacks the information required by SFM software, as it does not record exchangeable image file (EXIF) metadata for the frames. In this work, a solution is presented for modifying aerial video so that it can be used for photogrammetry. The paper then examines how the field of view affects the quality of the reconstruction. The input is unstabilized, distorted video footage obtained from a low-cost UAV, which is combined with an open-source SFM system to reconstruct a 3D model. This approach creates a high-quality reconstruction by reducing the number of unknown variables, such as focal length and sensor size, while increasing the data density. The experiments examine the optical field of view settings that provide sufficient overlap without sacrificing image quality or exacerbating distortion. The system costs less than €1000, and the results show the ability to reproduce 3D models of centimeter-level accuracy. For verification, the results were compared against millimeter-level accurate models derived from laser scanning.
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
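The EXIF fix the abstract describes might look like the following sketch: extract still frames with ffmpeg, then write a focal-length tag with the piexif library so SFM tools accept them. The frame rate, file names and the 3.6 mm focal length are placeholder assumptions, not values from the paper.

```python
import glob
import subprocess
import piexif  # pip install piexif

# 1. Extract still frames from the UAV video (here 2 frames per second).
subprocess.run(["ffmpeg", "-i", "flight.mp4", "-qscale:v", "2",
                "-vf", "fps=2", "frame_%04d.jpg"], check=True)

# 2. Write the missing EXIF focal length so SFM software can use the frames.
#    FocalLength is stored as a rational: (36, 10) means 3.6 mm (placeholder).
exif_bytes = piexif.dump({"Exif": {piexif.ExifIFD.FocalLength: (36, 10)}})
for jpg in glob.glob("frame_*.jpg"):
    piexif.insert(exif_bytes, jpg)
```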
Article
A Novel Vision-Based Classification System for Explosion Phenomena
by Sumaya Abusaleh, Ausif Mahmood, Khaled Elleithy and Sarosh Patel
J. Imaging 2017, 3(2), 14; https://doi.org/10.3390/jimaging3020014 - 15 Apr 2017
Cited by 3 | Viewed by 5259
Abstract
The need for proper design and implementation of an adequate surveillance system for detecting and categorizing explosion phenomena is rising as part of development planning for risk reduction processes, including mitigation and preparedness. In this context, we introduce state-of-the-art explosion classification using pattern recognition techniques. We define seven classes covering explosion and non-explosion phenomena: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds. Towards the classification goal, we collected a new dataset of 5327 2D RGB images used for training the classifier. Furthermore, to achieve high reliability and to provide multiple analyses of the monitored phenomena, we employ three approaches to feature extraction: texture features, features in the spatial domain, and features in the transform domain. Texture features are measured on intensity levels, with the Principal Component Analysis (PCA) algorithm used to obtain the 100 largest eigenvectors and eigenvalues. Features in the spatial domain are calculated as amplitude features in the YCbCr color model, with PCA again reducing the dimensionality to 100 features. Lastly, features in the transform domain are calculated using the Radix-2 Fast Fourier Transform (Radix-2 FFT), with PCA extracting the 100 largest eigenvectors. These texture, amplitude and frequency features are combined into an input vector of length 300, which provides valuable insight into the images under consideration. The features are fed into a combiner that maps the input frames to the desired outputs, dividing the feature space into categories; for this we employ a one-against-one multi-class degree-3 polynomial kernel Support Vector Machine (SVM). The efficiency of the proposed methodology was evaluated on a total of 980 frames retrieved from multiple YouTube videos, taken in real outdoor environments for the seven defined classes. We obtained an accuracy of 94.08%, and the total time for categorizing one frame was approximately 0.12 s.
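A minimal scikit-learn sketch of the feature fusion and classifier configuration described above: PCA to 100 components per feature family, concatenation into a 300-dimensional vector, and a one-against-one degree-3 polynomial SVM. The variable names and the per-set PCA fitting are assumptions of this sketch, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def build_feature_vector(intensity, ycbcr, spectrum):
    """Reduce each raw feature set (n_samples, n_features) to 100 PCA components
    and stack them into a 300-dimensional vector per sample."""
    parts = [PCA(n_components=100).fit_transform(block)
             for block in (intensity, ycbcr, spectrum)]
    return np.hstack(parts)  # shape: (n_samples, 300)

# One-against-one, degree-3 polynomial kernel SVM, as in the paper.
clf = SVC(kernel="poly", degree=3, decision_function_shape="ovo")
# Usage: clf.fit(build_feature_vector(i_train, y_train_feats, f_train), labels)
```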
Review
An Overview of Infrared Remote Sensing of Volcanic Activity
by Matthew Blackett
J. Imaging 2017, 3(2), 13; https://doi.org/10.3390/jimaging3020013 - 12 Apr 2017
Cited by 57 | Viewed by 11363
Abstract
Volcanic activity consists of the transfer of heat from the interior of the Earth to the surface. The characteristics of the emitted heat relate directly to the geological processes underway and can be observed from space using the thermal sensors present on many Earth-orbiting satellites. For over 50 years, scientists have utilised such sensors and are now able to determine the sort of volcanic activity being displayed without hazardous and costly field expeditions. This review describes the theoretical basis of the discipline, then discusses the sensors available and the history of their use, and closes with challenges and opportunities for future developments.
(This article belongs to the Special Issue The World in Infrared Imaging)
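The theoretical basis the review refers to rests on Planck's law of blackbody radiation, which links surface temperature to the spectral radiance a satellite sensor measures at a given wavelength. A small sketch using standard physical constants:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance L(lambda, T) in W m^-2 sr^-1 m^-1."""
    lam = wavelength_um * 1e-6
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temp_k)) - 1)

# A ~1000 K lava surface radiates strongly in the shortwave infrared (~2 um),
# while ~300 K ground is best observed in the thermal infrared (~11 um).
print(planck_radiance(2.0, 1000.0), planck_radiance(11.0, 300.0))
```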