Search Results (19)

Search Parameters:
Keywords = Harris corner detector

21 pages, 8609 KiB  
Article
Range Image-Aided Edge Line Estimation for Dimensional Inspection of Precast Bridge Slab Using Point Cloud Data
by Fangxin Li, Julian Pratama Putra Thedja, Sung-Han Sim, Joon-Oh Seo and Min-Koo Kim
Sustainability 2023, 15(16), 12243; https://doi.org/10.3390/su151612243 - 10 Aug 2023
Cited by 5 | Viewed by 1908
Abstract
The accurate estimation of edge lines in precast bridge slabs based on laser scanning is crucial for geometrical quality inspection. Normally, the as-designed model of precast slabs is matched with laser scan data to estimate the edge lines. However, this approach often leads to an inaccurate quality measurement because the slab as actually produced can be dimensionally different from the as-designed model, or an as-designed model may not exist at all. To overcome this limitation, this study proposes a novel algorithm that generates and utilizes range images generated from scan points to enhance accuracy. The proposed algorithm operates as follows: first, the scan points are transformed into range images, and the corner points of these range images are extracted using a Harris corner detector. Next, the dimensions of the precast bridge slab are computed based on the extracted corner points. Consequently, the extracted corner points from the range images serve as an input for edge line estimation, thereby eliminating the matching errors that could arise when aligning collected scan points to an as-designed model. To evaluate the feasibility of the proposed edge estimation algorithm, a series of tests were conducted on both lab-scale specimens and field-scale precast slabs. The results showed promising accuracy levels of 1.22 mm for lab-scale specimens and 3.10 mm for field-scale precast bridge slabs, demonstrating more accurate edge line estimation than traditional methods. These findings highlight the feasibility of the proposed image-aided geometrical inspection method and its great potential for application to both small-scale and full-scale prefabricated construction elements within the construction industry, particularly during the fabrication stage. Full article
(This article belongs to the Special Issue Prefabrication and Modularized Construction)
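The core step this abstract describes, extracting corner points from a range image with a Harris detector, can be sketched in pure NumPy. This is a minimal illustration, not the authors' implementation: the window size, the `k` constant, and the toy slab-on-background range image are all assumptions.

```python
import numpy as np

def box_sum(a, w):
    """Sum of `a` over a w-by-w window centred on each pixel (w odd)."""
    p = w // 2
    windows = np.lib.stride_tricks.sliding_window_view(np.pad(a, p), (w, w))
    return windows.sum(axis=(-2, -1))

def harris_response(img, k=0.05, w=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor accumulated over a local window."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box_sum(Ix * Ix, w), box_sum(Iy * Iy, w), box_sum(Ix * Iy, w)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Toy "range image": a slab 2 m from the scanner against a 5 m background.
range_img = np.full((40, 40), 5.0)
range_img[10:30, 10:30] = 2.0
R = harris_response(range_img)
r, c = np.unravel_index(R.argmax(), R.shape)  # strongest response at a slab corner
```

On real scan data the range image would first be rasterised from the point cloud, and the detected corners would then feed the edge-line estimation described in the abstract.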

15 pages, 7491 KiB  
Article
DTFS-eHarris: A High Accuracy Asynchronous Corner Detector for Event Cameras in Complex Scenes
by Jinxiu Zhao, Li Su, Xiangyu Wang, Jinjian Li, Fan Yang, Na Jiang and Quan Hu
Appl. Sci. 2023, 13(9), 5761; https://doi.org/10.3390/app13095761 - 7 May 2023
Cited by 4 | Viewed by 2192
Abstract
The event camera, a new bio-inspired vision sensor with low latency and high temporal resolution, has great potential and has demonstrated promising applications in machine vision and artificial intelligence. Corner detection is a key step in object motion estimation and tracking. However, most existing event-based corner detectors, such as G-eHarris and Arc*, produce a huge number of redundant or wrong corners and cannot strike a balance between accuracy and real-time performance, especially in complex, highly textured scenes that require higher computational costs. To address these issues, we propose an asynchronous corner detection method, a double threshold filter with Sigmoid eHarris (DTFS-eHarris), and an asynchronous corner tracker. The main contributions are a double threshold filter designed to reduce redundant events and an improved Sigmoid function used to represent the Surface of Active Events (Sigmoid*-SAE). We selected four scenes—shapes, dynamic, poster and boxes—from the public event camera dataset DAVIS240C to compare with the existing state-of-the-art hybrid method; our method shows more than a 10% reduction in false positive rate and a 5% and 20% improvement in accuracy and throughput, respectively. The evaluations indicate that DTFS-eHarris yields a significant improvement, especially in complex scenes. Thus, it is anticipated to enhance the real-time performance and feature detection accuracy of future robotic applications. Full article
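The two ingredients named in the contribution, a double threshold filter on event activity and a sigmoid-shaped Surface of Active Events, can be pictured with a small NumPy sketch. The time constant, the thresholds, and the 3x3 neighbourhood below are illustrative assumptions, not the actual DTFS-eHarris design.

```python
import numpy as np

def sigmoid_sae(latest_ts, t_now, tau=0.05):
    """Sigmoid-weighted Surface of Active Events: each pixel holds the
    timestamp of its latest event; fresh events map near 1, stale near 0."""
    age = t_now - latest_ts
    return 2.0 / (1.0 + np.exp(age / tau))

def passes_double_threshold(sae, y, x, lo=2, hi=7):
    """Keep an event only if its 3x3 neighbourhood holds between lo and hi
    fresh pixels: too few looks like noise, too many like dense texture
    that would flood the corner detector with redundant events."""
    patch = sae[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    fresh = int((patch > 0.5).sum())
    return lo <= fresh <= hi

ts = np.zeros((5, 5))                  # all pixels last fired long ago (t = 0)
ts[2, 2] = ts[2, 3] = ts[1, 2] = 1.0   # a small cluster of fresh events
sae = sigmoid_sae(ts, t_now=1.0)
keep = passes_double_threshold(sae, 2, 2)   # cluster of 3 is kept
```

A lone fresh event in the same frame would fall below `lo` and be filtered out, which is the noise-rejection half of the double threshold.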

18 pages, 2642 KiB  
Article
Tool Wear Monitoring Using Improved Dragonfly Optimization Algorithm and Deep Belief Network
by Leo Gertrude David, Raj Kumar Patra, Przemysław Falkowski-Gilski, Parameshachari Bidare Divakarachari and Lourdusamy Jegan Antony Marcilin
Appl. Sci. 2022, 12(16), 8130; https://doi.org/10.3390/app12168130 - 14 Aug 2022
Cited by 18 | Viewed by 2402
Abstract
In recent decades, tool wear monitoring has played a crucial role in the improvement of industrial production quality and efficiency. In the machining process, it is important to predict both tool cost and life, and to reduce equipment downtime. The conventional methods need enormous quantities of human resources and expert skills to achieve precise tool wear information. To automatically identify tool wear types, deep learning models are extensively used in the existing studies. In this manuscript, a new model is proposed for the effective classification of both serviceable and worn cutting edges. Initially, a dataset is chosen for experimental analysis that includes 254 images of edge profile cutting heads; then, the circular Hough transform, Canny edge detector, and standard Hough transform are used to segment 577 cutting edge images, of which 276 are disposable and 301 are functional. Furthermore, feature extraction is carried out on the segmented images utilizing Local Binary Patterns (LBPs), Speeded-Up Robust Features (SURF), Harris Corner Detection (HCD), Histogram of Oriented Gradients (HOG), and Grey-Level Co-occurrence Matrix (GLCM) feature descriptors to extract the texture feature vectors. Next, the dimension of the extracted features is reduced by an Improved Dragonfly Optimization Algorithm (IDOA) that lowers the computational complexity and running time of the Deep Belief Network (DBN) while classifying the serviceable and worn cutting edges. The experimental evaluations showed that the IDOA-DBN model attained 98.83% accuracy on the patch configuration of full edge division, which is superior to the existing deep learning models. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
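Of the texture descriptors listed (LBP, SURF, HCD, HOG, GLCM), the Local Binary Pattern is the easiest to show compactly: each pixel is encoded by which of its 8 neighbours are at least as bright, and the histogram of those codes becomes the feature vector. A minimal NumPy sketch, not tied to the paper's configuration:

```python
import numpy as np

def lbp8(img):
    """8-neighbour Local Binary Pattern code for every interior pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit `bit` wherever the neighbour is at least as bright.
        code |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: a texture feature vector."""
    h = np.bincount(lbp8(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

flat = np.full((5, 5), 7.0)
codes = lbp8(flat)   # a constant patch sets all 8 bits: code 255 everywhere
```

In the paper's pipeline, vectors like this (concatenated with the other descriptors) would then be pruned by the IDOA before reaching the DBN classifier.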

22 pages, 8875 KiB  
Article
Cognitive IoT Vision System Using Weighted Guided Harris Corner Feature Detector for Visually Impaired People
by Manoranjitham Rajendran, Punitha Stephan, Thompson Stephan, Saurabh Agarwal and Hyunsung Kim
Sustainability 2022, 14(15), 9063; https://doi.org/10.3390/su14159063 - 24 Jul 2022
Cited by 1 | Viewed by 2141
Abstract
India has an estimated 12 million visually impaired people, the largest such population of any country. Smart walking stick devices use various technologies, including machine vision and different sensors, to improve the safe movement of visually impaired persons. In machine vision, accurately recognizing a nearby object is still a challenging task. This paper provides a system to enable safe navigation and guidance for visually impaired people by implementing an object recognition module in the smart walking stick that uses a local feature extraction method to recognize an object under different image transformations. To provide stability and robustness, the Weighted Guided Harris Corner Feature Detector (WGHCFD) method is proposed to extract feature points from the image. WGHCFD discriminates image features competently and is suitable for different real-world conditions. The WGHCFD method is evaluated on the most popular Oxford benchmark datasets, where it achieves greater repeatability and matching scores than existing feature detectors. In addition, the proposed WGHCFD method is tested with a smart stick and achieves a 99.8% recognition rate under different transformation conditions for the safe navigation of visually impaired people. Full article

22 pages, 2845 KiB  
Article
Localization and Edge-Based Segmentation of Lumbar Spine Vertebrae to Identify the Deformities Using Deep Learning Models
by Malaika Mushtaq, Muhammad Usman Akram, Norah Saleh Alghamdi, Joddat Fatima and Rao Farhat Masood
Sensors 2022, 22(4), 1547; https://doi.org/10.3390/s22041547 - 17 Feb 2022
Cited by 49 | Viewed by 8601
Abstract
The lumbar spine plays a very important role in our load transfer and mobility. Vertebrae localization and segmentation are useful in detecting spinal deformities and fractures. Automated understanding of medical imagery is of major importance to help doctors handle the time-consuming manual or semi-manual diagnosis. Our paper presents methods that will help clinicians to grade the severity of the disease with confidence, as the current manual diagnosis by different doctors shows dissimilarity and variation in the analysis of diseases. In this paper we discuss lumbar spine localization and segmentation, which help the analysis of lumbar spine deformities. The lumbar spine is localized using YOLOv5, the fifth variant of the YOLO family and the fastest and lightest object detector. A mean average precision (mAP) of 0.975 is achieved by YOLOv5. To diagnose lumbar lordosis, we correlated the angles with the region area computed from the YOLOv5 centroids and obtained 74.5% accuracy. Cropped images from the YOLOv5 bounding boxes are passed through HED U-Net, a combination of segmentation and edge detection frameworks, to obtain the segmented vertebrae and their edges. Lumbar lordotic angles (LLAs) and lumbosacral angles (LSAs) are found after detecting the corners of vertebrae using a Harris corner detector, with very small mean errors of 0.29° and 0.38°, respectively. This paper compares the different object detectors used to localize the vertebrae, the results of the two methods used to diagnose the lumbar deformity, and our results with those of other researchers. Full article
(This article belongs to the Special Issue Neural Networks and Deep Learning in Image Sensing)
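The final measurement step, turning Harris-detected vertebra corners into lordotic and lumbosacral angles, reduces to the angle between two endplate lines. A sketch under the assumption that each endplate is represented by two corner points (the coordinates below are invented examples):

```python
import numpy as np

def endplate_angle(p1, p2, q1, q2):
    """Unsigned angle in degrees between the line through corner points
    p1-p2 and the line through q1-q2 (each point is (x, y))."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    w = np.asarray(q2, float) - np.asarray(q1, float)
    cos = abs(v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

# A horizontal endplate versus one tilted 30 degrees:
angle = endplate_angle((0, 0), (10, 0), (0, 0), (np.sqrt(3), 1.0))
```

The reported sub-degree mean errors imply that corner localisation noise of a pixel or two largely cancels when the two endpoints of each line shift together.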

16 pages, 1057 KiB  
Article
A Fingerprint-Based Verification Framework Using Harris and SURF Feature Detection Algorithms
by Samy Bakheet, Ayoub Al-Hamadi and Rehab Youssef
Appl. Sci. 2022, 12(4), 2028; https://doi.org/10.3390/app12042028 - 15 Feb 2022
Cited by 29 | Viewed by 7276
Abstract
Amongst all biometric-based personal authentication systems, the fingerprint, which gives each person a unique identity, is the most commonly used parameter for personal identification. In this paper, we present an automatic fingerprint-based authentication framework by means of fingerprint enhancement, feature extraction, and matching techniques. Initially, a variant of adaptive histogram equalization called CLAHE (contrast limited adaptive histogram equalization), together with a combination of FFT (fast Fourier transform) and Gabor filters, is applied to enhance the contrast of fingerprint images. The fingerprint is then authenticated by picking a small amount of information from some local interest points called minutiae point features. These features are extracted from the thinned binary fingerprint image with a hybrid combination of Harris and SURF feature detectors to render significantly improved detection results. For fingerprint matching, the Euclidean distance between the corresponding Harris-SURF feature vectors of two feature points is used as a feature matching similarity measure of two fingerprint images. Moreover, an iterative algorithm called RANSAC (RANdom SAmple Consensus) is applied for fine matching and to automatically eliminate false matches and incorrect match points. Quantitative experimental results achieved on the FVC2002 DB1 and FVC2000 DB1 public domain fingerprint databases demonstrate the good performance and feasibility of the proposed framework, with average recognition rates of 95% and 92.5%, respectively. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
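The matching rule in the abstract, Euclidean distance between Harris-SURF descriptors as the similarity measure, is ordinary nearest-neighbour matching. A minimal sketch (the distance threshold is an invented example, and the RANSAC refinement stage is not shown):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.5):
    """For each descriptor in A take the closest one in B by Euclidean
    distance, keeping the pair only if the distance is below max_dist."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    ok = d[np.arange(len(desc_a)), nn] < max_dist
    return [(int(i), int(nn[i])) for i in np.flatnonzero(ok)]

a = np.eye(3)                       # three toy descriptors
b = np.stack([a[2], a[0], a[1]])    # the same descriptors, permuted
matches = match_descriptors(a, b)   # recovers the permutation
```

In the full pipeline the surviving pairs would then be passed to RANSAC, which fits a transformation to random minimal subsets and discards pairs that disagree with the consensus model.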

20 pages, 6555 KiB  
Article
A Modified HSIFT Descriptor for Medical Image Classification of Anatomy Objects
by Sumeer Ahmad Khan, Yonis Gulzar, Sherzod Turaev and Young Suet Peng
Symmetry 2021, 13(11), 1987; https://doi.org/10.3390/sym13111987 - 20 Oct 2021
Cited by 34 | Viewed by 2398
Abstract
Modeling low-level features to high-level semantics in medical imaging is an important aspect of filtering anatomy objects. Bag of Visual Words (BOVW) representations have been proven effective in modeling these low-level features into mid-level representations. Convolutional neural nets are learning systems that can automatically extract high-quality representations from raw images. However, their deployment in the medical field is still challenging due to the lack of training data. In this paper, learned features obtained by training convolutional neural networks are compared with our proposed hand-crafted HSIFT features. The HSIFT feature is a symmetric fusion of a Harris corner detector and the Scale-Invariant Feature Transform (SIFT) with a BOVW representation. Both the SIFT process and the classification technique are enhanced by adopting bagging with a surrogate split method. Quantitative evaluation shows that our proposed hand-crafted HSIFT feature outperforms the learned features from convolutional neural networks in discriminating anatomy image classes. Full article
(This article belongs to the Section Computer)

14 pages, 4870 KiB  
Article
A Descriptor-Based Advanced Feature Detector for Improved Visual Tracking
by Kai Yit Kok and Parvathy Rajendran
Symmetry 2021, 13(8), 1337; https://doi.org/10.3390/sym13081337 - 24 Jul 2021
Cited by 3 | Viewed by 2555
Abstract
Despite years of work, a robust, widely applicable generic “symmetry detector” that can parallel other kinds of computer vision/image processing tools for the more basic structural characteristics, such as an “edge” or “corner” detector, remains a computational challenge. A new symmetry feature detector with a descriptor is proposed in this paper, namely the Simple Robust Features (SRF) algorithm. A performance comparison is made among SRF with SRF, Speeded-Up Robust Features (SURF) with SURF, Maximally Stable Extremal Regions (MSER) with SURF, Harris with Fast Retina Keypoint (FREAK), Minimum Eigenvalue with FREAK, Features from Accelerated Segment Test (FAST) with FREAK, and Binary Robust Invariant Scalable Keypoints (BRISK) with FREAK. A visual tracking dataset is used in this performance evaluation in terms of accuracy and computational cost. The results show that combining the SRF detector with the SRF descriptor is preferable, as it has on average the highest accuracy. Additionally, the computational cost of SRF with SRF is much lower than that of the others. Full article
(This article belongs to the Special Issue Symmetry in Computer Vision and Its Applications)

18 pages, 4201 KiB  
Article
Automatic Sub-Pixel Co-Registration of Remote Sensing Images Using Phase Correlation and Harris Detector
by Laila Rasmy, Imane Sebari and Mohamed Ettarid
Remote Sens. 2021, 13(12), 2314; https://doi.org/10.3390/rs13122314 - 12 Jun 2021
Cited by 23 | Viewed by 5075
Abstract
In this paper, we propose a new approach for sub-pixel co-registration based on Fourier phase correlation combined with the Harris detector. Because the standard phase correlation method achieves only pixel-level accuracy, another approach is required to reach sub-pixel matching precision. We first applied the Harris corner detector to extract corners from both the reference and the sensed images. Then, we identified their corresponding points using phase correlation between the image pairs. To achieve sub-pixel registration accuracy, two optimization algorithms were used. The effectiveness of the proposed method was tested with very high-resolution (VHR) remote sensing images, including Pleiades satellite images and aerial imagery. Compared with the speeded-up robust features (SURF)-based method, phase correlation with the Blackman window function produced 91% more matches with high reliability. Moreover, the results of the optimization analysis revealed that the Nelder–Mead algorithm performs better than the two-point step size gradient algorithm regarding localization accuracy and computation time. The proposed approach achieves an accuracy better than 0.5 pixels and outperforms the SURF-based method. It can achieve sub-pixel accuracy in the presence of noise and produces large numbers of correct matching points. Full article
(This article belongs to the Special Issue Correction of Remotely Sensed Imagery)
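The pixel-level starting point of this pipeline, standard Fourier phase correlation, can be sketched directly; the sub-pixel refinement via Nelder–Mead that the paper adds on top is not shown. The 64x64 random image and the (5, -3) shift below are test values, not the paper's data:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Integer translation of `moved` relative to `ref`, read off the peak
    of the inverse FFT of the normalised cross-power spectrum."""
    F, G = np.fft.fft2(ref), np.fft.fft2(moved)
    cps = np.conj(F) * G
    cps /= np.abs(cps) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Wrap peaks in the upper half of the spectrum to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
moved = np.roll(ref, (5, -3), axis=(0, 1))
shift = phase_correlation_shift(ref, moved)  # recovers (5, -3)
```

In the proposed method this estimate is computed per Harris corner on local patches, and the optimizer then refines each match below one pixel.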

14 pages, 9695 KiB  
Article
Flame Detection Using Appearance-Based Pre-Processing and Convolutional Neural Network
by Jinkyu Ryu and Dongkurl Kwak
Appl. Sci. 2021, 11(11), 5138; https://doi.org/10.3390/app11115138 - 31 May 2021
Cited by 31 | Viewed by 3733
Abstract
It is important for fire detectors to operate quickly in the event of a fire, but existing conventional fire detectors sometimes do not work properly or frequently raise false alarms for non-fire events. Therefore, in this study, HSV color conversion and Harris Corner Detection were used in the image pre-processing step to reduce the incidence of false detections. In addition, among the detected corners, the vicinity of the corner points facing upward was extracted as a region of interest (ROI), and the fire was determined using a convolutional neural network (CNN). These methods were designed to detect the appearance of flames based on their top-pointing properties, which resulted in higher accuracy and precision than when unprocessed input images were used in conventional object detection algorithms. This also reduced the false detection rate for non-fires, enabling high-precision fire detection. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
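The HSV gate in the pre-processing step can be sketched with the standard library alone. The hue, saturation, and value thresholds below are illustrative guesses at a flame-coloured band, not the paper's calibrated values:

```python
import colorsys

def is_flame_coloured(r, g, b):
    """Crude flame-colour test in HSV space: hue in the red-to-yellow
    band with reasonably high saturation and brightness."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    in_band = h <= 60.0 / 360.0 or h >= 350.0 / 360.0
    return in_band and s >= 0.4 and v >= 0.5

hot = is_flame_coloured(255, 140, 0)   # orange pixel passes
cold = is_flame_coloured(0, 0, 255)    # blue pixel is rejected
```

On full frames this mask would be applied per pixel (vectorised), and Harris corners found inside the mask would then seed the upward-facing ROIs passed to the CNN.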

16 pages, 4847 KiB  
Article
Combining Sparse and Dense Features to Improve Multi-Modal Registration for Brain DTI Images
by Simona Moldovanu, Lenuta Pană Toporaș, Anjan Biswas and Luminita Moraru
Entropy 2020, 22(11), 1299; https://doi.org/10.3390/e22111299 - 14 Nov 2020
Cited by 10 | Viewed by 3470
Abstract
A new solution to overcome the constraints of multimodality medical intra-subject image registration is proposed, using the mutual information (MI) of image histogram-oriented gradients as a new matching criterion. We present a rigid, multi-modal image registration algorithm based on linear transformation and oriented gradients for the alignment of T2-weighted (T2w) images (as a fixed reference) and diffusion tensor imaging (DTI) (b-values of 500 and 1250 s/mm2) as floating images of three patients, to compensate for motion during the acquisition process. Diffusion MRI is very sensitive to motion, especially when the intensity and duration of the gradient pulses (characterized by the b-value) increase. The proposed method relies on the whole brain surface and addresses the variability of anatomical features across an image stack. The sparse features are corners detected using the Harris corner detector, while the dense features use all image pixels through the image histogram of oriented gradients (HOG) as a measure of the degree of statistical dependence between a pair of registered images. HOG as a dense feature focuses on structure and extracts the oriented gradient image in the x and y directions. MI is used as the objective function for the optimization process. The entropy functions and the joint entropy function are determined using the HOG data. To determine the best image transformation, the fiducial registration error (FRE) measure is used. We compare the results against MI-based intensity results computed using a statistical intensity relationship between corresponding pixels in the source and target images. Our approach, which considers the whole brain, shows improved registration accuracy, robustness, and computational cost compared with registration algorithms that use anatomical features or region-of-interest areas with specific neuroanatomy. Despite the supplementary HOG computation task, the computation time is comparable for the MI-based intensity and MI-based HOG methods. Full article
(This article belongs to the Special Issue Entropy Based Image Registration)
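The matching criterion, mutual information estimated from histograms, is H(X) + H(Y) - H(X, Y). A sketch over raw value samples (the paper computes it over HOG features instead, and the bin count here is an arbitrary choice):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """MI(X;Y) in bits, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    return entropy(px) + entropy(py) - entropy(pxy.ravel())

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
mi_self = mutual_information(x, x)                           # high: X determines itself
mi_indep = mutual_information(x, rng.standard_normal(5000))  # near zero
```

During registration the optimizer moves the floating image and keeps the transform that maximises this quantity between the two feature images.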

27 pages, 15509 KiB  
Article
A Robust Algorithm Based on Phase Congruency for Optical and SAR Image Registration in Suburban Areas
by Lina Wang, Mingchao Sun, Jinghong Liu, Lihua Cao and Guoqing Ma
Remote Sens. 2020, 12(20), 3339; https://doi.org/10.3390/rs12203339 - 13 Oct 2020
Cited by 30 | Viewed by 4248
Abstract
Automatic registration of optical and synthetic aperture radar (SAR) images is a challenging task due to the influence of SAR speckle noise and nonlinear radiometric differences. This study proposes a robust algorithm based on phase congruency to register optical and SAR images (ROS-PC). It consists of a uniform Harris feature detection method based on the multi-moment of the phase congruency map (UMPC-Harris) and a local feature descriptor based on the histogram of phase congruency orientation on multi-scale max amplitude index maps (HOSMI). UMPC-Harris detects corners and edge points using a voting strategy, the multi-moments of phase congruency maps, and an overlapping block strategy, in order to obtain stable and uniformly distributed keypoints. Subsequently, the HOSMI descriptor is derived for each keypoint by utilizing the histogram of phase congruency orientation on multi-scale max amplitude index maps, which effectively increases the discriminability and robustness of the final descriptor. Experimental results obtained on simulated images show that the UMPC-Harris detector has a superior repeatability rate, while image registration results on test images show that ROS-PC is robust against SAR speckle noise and nonlinear radiometric differences and can tolerate some rotational and scale changes. Full article
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)

14 pages, 14103 KiB  
Article
Robust and Efficient Corner Detector Using Non-Corners Exclusion
by Tao Luo, Zaifeng Shi and Pumeng Wang
Appl. Sci. 2020, 10(2), 443; https://doi.org/10.3390/app10020443 - 7 Jan 2020
Cited by 14 | Viewed by 7673
Abstract
Corner detection is a traditional type of feature point detection method. With its good accuracy and its invariance to rotation, noise and illumination, the Harris corner detector is widely used in vision tasks and image processing. Although it performs well in detection quality, its application is limited by its low detection efficiency. Efficiency is crucial in many applications because it determines whether the detector is suitable for real-time tasks. In this paper, a robust and efficient corner detector (RECD), improved from the Harris corner detector, is proposed. First, we borrow the principle of the features from accelerated segment test (FAST) algorithm for corner pre-detection, in order to rule out obvious non-corners and retain strong corners as real corners; the remaining uncertain corners are treated as candidate corners. Second, the gradients are calculated in the same way as in the original Harris detector, but only for those candidate corners. Third, to reduce the additional computation, the corner response function (CRF) is calculated only for the candidate corners. Finally, we replace the computationally expensive non-maximum suppression (NMS) with an improved NMS to obtain the resulting corners. Experiments demonstrate that RECD is more competitive than some popular corner detectors in detection quality and speed. The accuracy and robustness of our method are slightly better than those of the original Harris detector, and the detection time is only approximately 8.2% of the original. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
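The pre-detection idea, ruling out non-corners with a cheap test before paying for the Harris corner response function, can be sketched as below. The 8-pixel ring at radius 2 is a simplified stand-in for FAST's 16-pixel ring, and the thresholds are illustrative, not RECD's actual parameters:

```python
import numpy as np

def candidate_mask(img, t=10.0, min_hits=3):
    """FAST-style pre-test: flag a pixel as a candidate corner when at
    least min_hits of 8 ring pixels (radius 2) differ from the centre
    by more than t. Only candidates need the full Harris CRF later."""
    c = img[2:-2, 2:-2].astype(float)
    ring = [img[:-4, 2:-2], img[4:, 2:-2], img[2:-2, :-4], img[2:-2, 4:],
            img[:-4, :-4], img[:-4, 4:], img[4:, :-4], img[4:, 4:]]
    hits = sum((np.abs(r.astype(float) - c) > t).astype(int) for r in ring)
    mask = np.zeros(img.shape, dtype=bool)
    mask[2:-2, 2:-2] = hits >= min_hits
    return mask

img = np.zeros((40, 40))
img[10:30, 10:30] = 50.0
mask = candidate_mask(img)
# Only pixels near the square's boundary survive; gradients and the CRF
# are then evaluated on this small fraction instead of every pixel.
```

This captures where the reported speedup comes from: flat regions (most of a typical image) never reach the expensive gradient and CRF stages.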

42 pages, 26216 KiB  
Article
Development of an Image Processing Module for Autonomous Underwater Vehicles through Integration of Visual Recognition with Stereoscopic Image Reconstruction
by Yu-Hsien Lin, Shao-Yu Chen and Chia-Hung Tsou
J. Mar. Sci. Eng. 2019, 7(4), 107; https://doi.org/10.3390/jmse7040107 - 18 Apr 2019
Cited by 17 | Viewed by 6307
Abstract
This study investigated the development of visual recognition and stereoscopic imaging technology, applying them to the construction of an image processing system for autonomous underwater vehicles (AUVs). For the proposed visual recognition technology, a Hough transform was combined with an optical flow algorithm to detect the linear features and movement speeds of dynamic images; the proposed stereoscopic imaging technique employed a Harris corner detector to estimate the distance of the target. A physical AUV was constructed with a wide-angle lens camera and a binocular vision device mounted on the bow to provide image input. Subsequently, a simulation environment was established in Simscape Multibody and used to control the post-driver system of the stern, which contained horizontal and vertical rudder planes as well as the propeller. In static testing at National Cheng Kung University, physical targets were placed in a stability water tank; the study compared the analysis results obtained from various brightness and turbidity conditions in out-of-water and underwater environments. Finally, the dynamic testing results were combined with a fuzzy controller to output the real-time responses of the vehicle regarding the angles, rates of the rudder planes, and the propeller revolution speeds at various distances. Full article
(This article belongs to the Special Issue Underwater Technology—Hydrodynamics and Control System)
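The distance-estimation step, ranging a target from Harris corners matched between the two cameras of the binocular device, reduces to the pinhole disparity relation Z = f * B / d. The focal length and baseline below are invented example values, not the vehicle's calibration:

```python
def stereo_depth_m(x_left_px, x_right_px, focal_px=800.0, baseline_m=0.12):
    """Pinhole stereo range: Z = focal * baseline / disparity, where the
    disparity is the x-offset of one corner matched in both images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched corner must sit further left in the right image")
    return focal_px * baseline_m / disparity

# A corner seen at x=210 px in the left image and x=178 px in the right:
depth = stereo_depth_m(210.0, 178.0)   # 32 px disparity -> 3.0 m range
```

Underwater, refraction through the housing effectively changes the focal length, so the calibration would differ between the out-of-water and underwater test conditions described in the abstract.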

20 pages, 1993 KiB  
Article
High-Throughput Line Buffer Microarchitecture for Arbitrary Sized Streaming Image Processing
by Runbin Shi, Justin S.J. Wong and Hayden K.-H. So
J. Imaging 2019, 5(3), 34; https://doi.org/10.3390/jimaging5030034 - 6 Mar 2019
Cited by 4 | Viewed by 9070
Abstract
Parallel hardware designed for image processing promotes vision-guided intelligent applications. With the advantages of high throughput and low latency, streaming architecture on FPGA is especially attractive for real-time image processing. Notably, many real-world applications, such as region of interest (ROI) detection, demand the ability to process images continuously at different sizes and resolutions in hardware without interruptions. FPGA is especially suitable for implementing such a flexible streaming architecture, but most existing solutions require run-time reconfiguration and hence cannot achieve seamless image size-switching. In this paper, we propose a dynamically-programmable buffer architecture (D-SWIM) based on the Stream-Windowing Interleaved Memory (SWIM) architecture to realize image processing on FPGA for image streams at arbitrary sizes defined at run time. D-SWIM redefines the way on-chip memory is organized and controlled, and the hardware adapts to arbitrary image sizes with sub-100 ns delay, ensuring minimal interruption to image processing at a high frame rate. Compared to the prior SWIM buffer for high-throughput scenarios, D-SWIM achieves dynamic programmability with only a slight overhead in logic resource usage, while saving up to 56% of the BRAM resource. The D-SWIM buffer achieves a maximum operating frequency of 329.5 MHz and reduces power consumption by 45.7% compared with the SWIM scheme. Real-world image processing applications, such as 2D convolution and the Harris corner detector, have also been used to evaluate D-SWIM's performance, achieving pixel throughputs of 4.5 Giga Pixel/s and 4.2 Giga Pixel/s, respectively. Compared to implementations with prior streaming frameworks, the D-SWIM-based design not only realizes seamless image size-switching but also improves hardware efficiency by up to 30×. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs)
