Search Results (21)

Search Parameters:
Keywords = Gaussian–Sobel

27 pages, 75388 KB  
Article
High-Fidelity 3D Gaussian Splatting for Exposure-Bracketing Space Target Reconstruction: OBB-Guided Regional Densification with Sobel Edge Regularization
by Yijin Jiang, Xiaoyuan Ren, Huanyu Yin, Libing Jiang, Canyu Wang and Zhuang Wang
Remote Sens. 2025, 17(12), 2020; https://doi.org/10.3390/rs17122020 - 11 Jun 2025
Viewed by 2756
Abstract
In this paper, a novel optimization framework based on 3D Gaussian splatting (3DGS) for high-fidelity 3D reconstruction of space targets under exposure bracketing conditions is studied. In the considered scenario, multi-view optical imagery captures space targets under complex and dynamic illumination, where severe inter-frame brightness variations degrade reconstruction quality by introducing photometric inconsistencies and blurring fine geometric details. Unlike existing methods, we explicitly address these challenges by integrating exposure-aware adaptive refinement and edge-preserving regularization into the 3DGS pipeline. Specifically, we propose an exposure bracketing-oriented bounding box (OBB) regional densification strategy to dynamically identify and refine under-reconstructed regions. In addition, we introduce a Sobel edge regularization mechanism to guide the learning of sharp geometric features and improve texture fidelity. To validate the framework, experiments are conducted on both a custom OBR-ST dataset and the public SHIRT dataset, demonstrating that our method significantly outperforms state-of-the-art techniques in geometric accuracy and visual quality under exposure-bracketing scenarios. The results highlight the effectiveness of our approach in enabling robust in-orbit perception for space applications.
(This article belongs to the Special Issue Advances in 3D Reconstruction with High-Resolution Satellite Data)
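
The abstract does not give the exact form of the Sobel edge regularizer, so the following is only a minimal sketch of one plausible formulation: an L1 penalty between Sobel gradient magnitudes of a rendered view and its reference image, written in NumPy/SciPy rather than inside the differentiable 3DGS renderer the authors would use. All names are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image via Sobel filters."""
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    return np.hypot(gx, gy)

def sobel_edge_loss(rendered, reference):
    """L1 difference between edge maps of a rendered view and its reference;
    one plausible edge-preserving regularization term, not the paper's exact one."""
    return np.mean(np.abs(sobel_magnitude(rendered) - sobel_magnitude(reference)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    ren = ref + 0.05 * rng.standard_normal((128, 128))
    print(f"edge regularization term: {sobel_edge_loss(ren, ref):.4f}")
```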

23 pages, 13758 KB  
Article
Edge–Region Collaborative Segmentation of Potato Leaf Disease Images Using Beluga Whale Optimization Algorithm with Danger Sensing Mechanism
by Jin-Ling Bei and Ji-Quan Wang
Agriculture 2025, 15(11), 1123; https://doi.org/10.3390/agriculture15111123 - 23 May 2025
Viewed by 446
Abstract
Precise detection of potato diseases is critical for food security, yet traditional image segmentation methods struggle with challenges including uneven illumination, background noise, and the gradual color transitions of lesions under complex field conditions. Therefore, a collaborative segmentation framework of Otsu and Sobel edge detection based on the beluga whale optimization algorithm with a danger sensing mechanism (DSBWO) is proposed. The method introduces an S-shaped control parameter, a danger sensing mechanism, a dynamic foraging strategy, and an improved whale fall model to enhance global search ability, prevent premature convergence, and improve solution quality. DSBWO demonstrates superior optimization performance on the CEC2017 benchmark, with faster convergence and higher accuracy than other algorithms. Experiments on the Berkeley Segmentation Dataset and potato early/late blight images show that DSBWO achieves excellent segmentation performance across multiple evaluation metrics. Specifically, it reaches a maximum IoU of 0.8797, outperforming JSBWO (0.8482) and PSOSHO (0.8503), while maintaining competitive PSNR and SSIM values. Even under different Gaussian noise levels, DSBWO maintains stable segmentation accuracy and low CPU time, confirming its robustness. These findings suggest that DSBWO provides a reliable and efficient solution for automatic crop disease monitoring and can be extended to other smart agriculture applications.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
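
A rough illustration of the edge-region collaboration itself (Otsu region mask combined with a Sobel edge map) is sketched below; OpenCV's built-in Otsu threshold stands in for the DSBWO metaheuristic search described in the paper, and the weighting rule is an assumption for illustration only.

```python
import cv2
import numpy as np

def edge_region_segmentation(gray, edge_weight=0.5):
    """Fuse an Otsu region mask with a Sobel edge map into one score map.
    The paper's DSBWO algorithm would search for the thresholds instead of
    using cv2's built-in Otsu; only the collaboration idea is shown here."""
    _, region = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.normalize(cv2.magnitude(gx, gy), None, 0, 1, cv2.NORM_MINMAX)
    score = edge_weight * edges + (1 - edge_weight) * (region.astype(np.float64) / 255)
    return (score > 0.5).astype(np.uint8) * 255

if __name__ == "__main__":
    gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in leaf image
    mask = edge_region_segmentation(gray)
    print(mask.shape, mask.dtype)
```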

17 pages, 18022 KB  
Article
A Multiscale Gradient Fusion Method for Color Image Edge Detection Using CBM3D Filtering
by Zhunruo Feng, Ruomeng Shi, Yuhan Jiang, Yiming Han, Zeyang Ma and Yuheng Ren
Sensors 2025, 25(7), 2031; https://doi.org/10.3390/s25072031 - 24 Mar 2025
Cited by 8 | Viewed by 1072
Abstract
In this paper, we present a novel color edge detection method that integrates collaborative filtering with multiscale gradient fusion. The Block-Matching and 3D (BM3D) filter is utilized to enhance sparse representations in the transform domain, effectively reducing noise. The multiscale gradient fusion technique [...] Read more.
In this paper, we present a novel color edge detection method that integrates collaborative filtering with multiscale gradient fusion. The Block-Matching and 3D (BM3D) filter is utilized to enhance sparse representations in the transform domain, effectively reducing noise. The multiscale gradient fusion technique compensates for the loss of detail in single-scale edge detection, thereby improving both edge resolution and overall quality. RGB images from the dataset are converted into the XYZ color space through mathematical transformations. The Colored Block-Matching and 3D (CBM3D) filter is applied to the sparse images to reduce noise. Next, the vector gradients of the color image and anisotropic Gaussian directional derivatives for two scale parameters are computed. These are then averaged pixel-by-pixel to generate a refined edge strength map. To enhance the edge features, the image undergoes normalization and non-maximum suppression. This is followed by edge contour extraction using double-thresholding and a novel morphological refinement technique. Experimental results on the edge detection dataset demonstrate that the proposed method offers robust noise resistance and superior edge quality, outperforming traditional methods such as Color Sobel, Color Canny, SE, and Color AGDD, as evidenced by performance metrics including the PR curve, AUC, PSNR, MSE, and FOM. Full article
(This article belongs to the Special Issue Digital Twin-Enabled Deep Learning for Machinery Health Monitoring)
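
A toy version of the multiscale fusion step (edge-strength maps computed at two Gaussian scales and averaged pixel-by-pixel) is given below; isotropic Gaussian derivatives from SciPy stand in for the anisotropic directional derivatives and the color vector gradient used in the paper, and the two sigma values are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def fused_edge_strength(gray, sigmas=(1.0, 2.5)):
    """Average Gaussian-derivative edge maps computed at two scales.
    The paper fuses anisotropic Gaussian directional derivatives with a
    color vector gradient; isotropic derivatives stand in for them here."""
    maps = [gaussian_gradient_magnitude(gray.astype(float), sigma=s) for s in sigmas]
    fused = np.mean(maps, axis=0)
    return fused / (fused.max() + 1e-12)   # normalized for NMS / double thresholding

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((200, 200))
    esm = fused_edge_strength(img)
    print(esm.min(), esm.max())
```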

17 pages, 17602 KB  
Article
Enhancing Detection of Pedestrians in Low-Light Conditions by Accentuating Gaussian–Sobel Edge Features from Depth Maps
by Minyoung Jung and Jeongho Cho
Appl. Sci. 2024, 14(18), 8326; https://doi.org/10.3390/app14188326 - 15 Sep 2024
Cited by 6 | Viewed by 2333
Abstract
Owing to the low detection accuracy of camera-based object detection models, various fusion techniques with Light Detection and Ranging (LiDAR) have been attempted. This has resulted in improved detection of objects that are difficult to detect due to partial occlusion by obstacles or unclear silhouettes. However, the detection performance remains limited in low-light environments where small pedestrians are located far from the sensor or pedestrians have difficult-to-estimate shapes. This study proposes an object detection model that employs a Gaussian–Sobel filter. This filter combines Gaussian blurring, which suppresses the effects of noise, and a Sobel mask, which accentuates object features, to effectively utilize depth maps generated by LiDAR for object detection. The model performs independent pedestrian detection using the real-time object detection model You Only Look Once v4, based on RGB images obtained using a camera and depth maps preprocessed by the Gaussian–Sobel filter, and estimates the optimal pedestrian location using non-maximum suppression. This enables accurate pedestrian detection while maintaining a high detection accuracy even in low-light or external-noise environments, where object features and contours are not well defined. The test evaluation results demonstrated that the proposed method achieved at least 1–7% higher average precision than the state-of-the-art models under various environments.
(This article belongs to the Special Issue Object Detection and Image Classification)
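
The Gaussian–Sobel preprocessing itself is straightforward to reproduce: blur the LiDAR-derived depth map to suppress noise, then apply a Sobel mask to accentuate contours before the depth branch of the detector. A minimal OpenCV sketch follows; the kernel sizes and the final normalization are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def gaussian_sobel(depth_map, ksize=5, sigma=1.5):
    """Gaussian blur (noise suppression) followed by a Sobel mask
    (feature accentuation) on a LiDAR-derived depth map."""
    blurred = cv2.GaussianBlur(depth_map, (ksize, ksize), sigma)
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

if __name__ == "__main__":
    depth = np.random.randint(0, 256, (384, 1248), dtype=np.uint8)  # stand-in depth map
    edge_depth = gaussian_sobel(depth)
    # edge_depth would be fed to the depth-map branch of the YOLOv4 detector
    print(edge_depth.shape, edge_depth.dtype)
```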

17 pages, 8897 KB  
Article
The Detection of Railheads: An Innovative Direct Image Processing Method
by Volodymyr Tverdomed, Zhuk Dmytro, Natalia Kokriatska and Vaidas Lukoševičius
Sustainability 2024, 16(12), 5109; https://doi.org/10.3390/su16125109 - 15 Jun 2024
Cited by 1 | Viewed by 2074
Abstract
This study presents a fully automated railhead detection method based on a direct image processing algorithm for use on a railway track. This method functions at a much faster pace than artificial intelligence algorithms that process rail images on embedded systems or low-power devices, as it does not require the use of significant computing resources. With the use of this method, railheads can be analyzed to identify the presence of cracks and other defects. We converted color images to halftone images, performed histogram equalization to improve the contrast, applied a Gaussian filter to reduce the presence of noise, utilized convolutional filters to extract vertical and horizontal lines, applied the Canny method and Sobel filters to refine the boundaries of the extracted lines, applied the Hough transform technique to extract lines belonging to the railhead images, and identified the segments with the highest brightness values to process the images of the railheads under study. The method of railhead separation described in this article will allow for further comprehensive diagnostics of the condition of rail threads to ensure the safe and sustainable operation of railway transport. The implementation of intelligent maintenance systems and effective monitoring of railway track conditions can reduce the negative impact on the environment and contribute to the advancement of rail transport as a sustainable, safe, and more environmentally friendly mode of transportation.
(This article belongs to the Special Issue Sustainable Railway Construction, Operation and Transportation)
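
A compressed OpenCV sketch of the described chain (grayscale conversion, histogram equalization, Gaussian smoothing, edge detection, and Hough line extraction) is shown below; the thresholds and Hough parameters are placeholders rather than the paper's values, and the final brightness-based segment selection is omitted.

```python
import cv2
import numpy as np

def candidate_rail_lines(bgr):
    """Grayscale -> equalize -> Gaussian blur -> Canny -> Hough lines.
    Parameter values are placeholders, not those used in the paper."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                  # contrast improvement
    gray = cv2.GaussianBlur(gray, (5, 5), 1.4)     # noise reduction
    edges = cv2.Canny(gray, 50, 150)               # boundary refinement
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    return [] if lines is None else lines.reshape(-1, 4)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
    for x1, y1, x2, y2 in candidate_rail_lines(frame):
        print(x1, y1, x2, y2)
```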

13 pages, 6028 KB  
Proceeding Paper
A Pore Classification System for the Detection of Additive Manufacturing Defects Combining Machine Learning and Numerical Image Analysis
by Sahar Mahdie Klim Al-Zaidawi and Stefan Bosse
Eng. Proc. 2023, 58(1), 122; https://doi.org/10.3390/ecsa-10-16024 - 15 Nov 2023
Cited by 4 | Viewed by 1657
Abstract
This study aims to enhance additive manufacturing (AM) quality control. AM builds 3D objects layer by layer, potentially causing defects. High-resolution micrograph data capture internal material defects, e.g., pores, which are vital for evaluating material properties, but image acquisition and analysis are time-consuming. This study introduces a hybrid machine learning (ML) approach that combines model-based image processing and data-driven supervised ML to detect and classify different pore types in AM micrograph data. Pixel-based features are extracted using, e.g., Sobel and Gaussian filters on the input micrograph image. Standard image processing algorithms detect pore defects, generating labels based on different features, e.g., area, convexity, aspect ratio, and circularity, and providing an automated feature labeling for training. This approach achieves sufficient accuracy by training a Random Forest as a hybrid-model data-driven classifier, compared with a pure data-driven model such as a CNN.
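
A schematic version of the pixel-feature extraction and Random Forest training stage might look like the following; the feature choice is a small illustrative subset, and the labels are random stand-ins for the rule-based pore labels (area, convexity, aspect ratio, circularity) described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack simple per-pixel features (intensity, two Gaussian smoothings,
    Sobel responses) as inputs for a pore classifier."""
    feats = [img,
             gaussian_filter(img, 1.0),
             gaussian_filter(img, 3.0),
             sobel(img, axis=0),
             sobel(img, axis=1)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    micrograph = rng.random((64, 64))              # stand-in micrograph tile
    X = pixel_features(micrograph)
    y = rng.integers(0, 3, size=X.shape[0])        # random stand-in pore-type labels
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
```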

13 pages, 3071 KB  
Article
Fourier Ptychography-Based Measurement of Beam Divergence Angle for Vertical Cavity Surface-Emitting Laser
by Leilei Jia, Xin Qian and Lingyu Ai
Photonics 2023, 10(7), 777; https://doi.org/10.3390/photonics10070777 - 4 Jul 2023
Viewed by 2339
Abstract
The Vertical Cavity Surface-Emitting Laser (VCSEL) has led to the rapid development of advanced fields such as communication, optical sensing, smart cars, and more. The accurate testing of VCSEL beam quality is an important prerequisite for its effective application. In this paper, a method for measuring the divergence angle of the VCSEL far-field spot based on transmissive Fourier ptychography is proposed. First, a single-CCD multi-angle VCSEL far-field spot acquisition system is designed. Second, based on the proposed Fourier ptychographic algorithm with synchronous optimization of the embedded optical transfer function, a resolution-enhanced phase image of the spot is reconstructed, and the boundary extracted from the phase image by the Sobel operator is defined as the boundary position of the beam waist. In this way, the beam waist radius of the laser beam is calculated. Finally, the divergence angle of the laser beam is measured via the radius of the beam waist. Compared with the traditional Gaussian beam definition method, the method proposed in this paper has higher accuracy in divergence angle measurement. The experimental results show that this method can improve the divergence angle measurement accuracy by up to 9.7%.
(This article belongs to the Section Lasers, Light Sources and Sensors)

12 pages, 20892 KB  
Article
Adaptive Feature Fusion and Kernel-Based Regression Modeling to Improve Blind Image Quality Assessment
by Jihyoung Ryu
Appl. Sci. 2023, 13(13), 7522; https://doi.org/10.3390/app13137522 - 26 Jun 2023
Cited by 2 | Viewed by 1479
Abstract
In the fields of image processing and computer vision, blind image quality assessment (BIQA) is still a difficult task. In this paper, a unique BIQA framework is presented that integrates feature extraction, feature selection, and regression using a support vector machine (SVM). Various image characteristics are included in the framework, such as the wavelet transform, Prewitt and Gaussian, LoG and Gaussian, and Prewitt, Sobel, and Gaussian filters. An SVM regression model is trained using these features to predict the quality ratings of photographs. The proposed model uses the Information Gain attribute approach for feature selection to improve the performance of the regression model and decrease the size of the feature space. Three commonly used benchmark datasets, TID2013, CSIQ, and LIVE, are utilized to assess the performance of the proposed methodology. The study examines how various feature types and feature selection strategies affect the functionality of the framework through thorough experiments. The experimental findings demonstrate that our suggested framework reaches the highest levels of accuracy and robustness, suggesting that it has considerable potential to improve the accuracy and dependability of BIQA approaches. Additionally, its use can be broadened to include image transmission, compression, and restoration. Overall, the results demonstrate our framework’s promise and ability to advance studies into image quality assessment.
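
A minimal sketch of the feature-plus-regression pipeline is given below: per-image statistics of a few filter responses feed a support vector regressor that predicts a quality score. The features, data, and scores are synthetic placeholders, scikit-learn's SVR stands in for the paper's SVM regression setup, and mutual information substitutes for the Information Gain selection step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, prewitt, gaussian_laplace
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

def image_features(img):
    """Mean and std of several filter responses, one feature row per image."""
    responses = [gaussian_filter(img, 1.5), sobel(img), prewitt(img),
                 gaussian_laplace(img, 1.0)]
    return np.array([s for r in responses for s in (r.mean(), r.std())])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    imgs = rng.random((40, 64, 64))          # synthetic "distorted" images
    mos = rng.random(40) * 5                 # synthetic quality scores
    X = np.array([image_features(im) for im in imgs])
    keep = np.argsort(mutual_info_regression(X, mos))[-4:]   # crude feature selection
    model = SVR(kernel="rbf").fit(X[:, keep], mos)
    print("predicted score:", model.predict(X[:1, keep])[0])
```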

16 pages, 9702 KB  
Article
Method and Installation for Efficient Automatic Defect Inspection of Manufactured Paper Bowls
by Shaoyong Yu, Yang-Han Lee, Cheng-Wen Chen, Peng Gao, Zhigang Xu, Shunyi Chen and Cheng-Fu Yang
Photonics 2023, 10(6), 686; https://doi.org/10.3390/photonics10060686 - 14 Jun 2023
Cited by 2 | Viewed by 1873
Abstract
Various techniques were combined to optimize an optical inspection system designed to automatically inspect defects in manufactured paper bowls. A self-assembled system was utilized to capture images of defects on the bowls. The system employed an image sensor with a multi-pixel array that combined a complementary metal-oxide semiconductor and a photo detector. A combined ring light served as the light source, while an infrared (IR) LED matrix panel was used to provide constant IR light to highlight the outer edges of the objects being inspected. The techniques employed in this study to enhance defect inspections on produced paper bowls included Gaussian filtering, Sobel operators, binarization, and connected components. Captured images were processed using these technologies. Once the non-contact inspection system’s machine vision method was completed, defects on the produced paper bowls were inspected using the system developed in this study. Three inspection methods were used in this study: internal inspection, external inspection, and bottom inspection. All three methods were able to inspect surface features of produced paper bowls, including dirt, burrs, holes, and uneven thickness. The results of our study showed that the average time required for machine vision inspections of each paper bowl was significantly less than the time required for manual inspection. Therefore, the investigated machine vision system is an efficient method for inspecting defects in fabricated paper bowls.
(This article belongs to the Special Issue Advanced Photonics Sensors, Sources, Systems and Applications)
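
The core measurement chain (Gaussian smoothing, Sobel edge accentuation, binarization, connected-component analysis) can be illustrated with OpenCV as below; the minimum-area threshold separating acceptable texture from a reportable defect is an assumed placeholder, not a value from the paper.

```python
import cv2
import numpy as np

def defect_regions(gray, min_area=50):
    """Gaussian filter -> Sobel -> Otsu binarization -> connected components.
    Returns bounding boxes of blobs large enough to be treated as defects."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, binary = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # stats row layout: [x, y, width, height, area]; label 0 is the background
    return [tuple(stats[i, :4]) for i in range(1, n) if stats[i, 4] >= min_area]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (300, 300), dtype=np.uint8)  # stand-in bowl image
    print(len(defect_regions(img)), "candidate defect regions")
```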

19 pages, 5966 KB  
Article
Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions
by Muhammad Awais Javeed, Muhammad Arslan Ghaffar, Muhammad Awais Ashraf, Nimra Zubair, Ahmed Sayed M. Metwally, Elsayed M. Tag-Eldin, Patrizia Bocchetta, Muhammad Sufyan Javed and Xingfang Jiang
Electronics 2023, 12(5), 1079; https://doi.org/10.3390/electronics12051079 - 21 Feb 2023
Cited by 29 | Viewed by 6552
Abstract
An Otsu-threshold- and Canny-edge-detection-based fast Hough transform (FHT) approach to lane detection was proposed to improve the accuracy of lane detection for autonomous vehicle driving. During the last two decades, autonomous vehicles have become very popular, and they help to avoid traffic accidents caused by human error. The new generation of vehicles needs automatic driving intelligence, and one of the essential functions of a cutting-edge automobile system is lane detection. This study recommended the idea of lane detection through improved (extended) Canny edge detection using a fast Hough transform. The Gaussian blur filter was used to smooth the image and reduce noise, which helps to improve the edge detection accuracy. The Sobel operator, an edge detection operator, calculated the gradient of the image intensity with a convolutional kernel to identify edges in the image. These techniques were applied in the initial lane detection module to enhance the characteristics of the road lanes, making them easier to detect in the image. The Hough transform was then used to identify the lanes based on the mathematical relationship between the lanes and the vehicle: the image was converted into a polar coordinate system and searched for lines within a specific range of contrasting points, which allowed the algorithm to distinguish the lanes from other features in the image. After this, the Hough transform made it possible to separate left and right lane markings; for traditional approaches to work effectively, the region of interest (ROI) must first be extracted. The proposed methodology was tested on several image sequences, and least-squares fitting in this region was then used to track the lane. The proposed system achieved high lane detection accuracy in experiments, showing that the identification method performed well in terms of reasoning speed and identification accuracy, balancing accuracy and real-time processing, and could satisfy the requirements of lane recognition for lightweight automatic driving systems.
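
Downstream of the Canny/Hough stage, the left/right lane separation and least-squares tracking mentioned above are easy to sketch; splitting by slope sign is an assumption about the method, and the segments below are toy data.

```python
import numpy as np

def fit_lane_sides(segments):
    """Split Hough segments into left/right by slope sign and fit each side
    with a least-squares line x = a*y + b (convenient for near-vertical lanes)."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue                            # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        (left if slope < 0 else right).append(((x1, y1), (x2, y2)))
    fits = {}
    for name, side in (("left", left), ("right", right)):
        if side:
            pts = np.array([p for seg in side for p in seg], dtype=float)
            fits[name] = np.polyfit(pts[:, 1], pts[:, 0], 1)   # x as a function of y
    return fits

if __name__ == "__main__":
    segs = [(100, 400, 200, 300), (540, 300, 640, 400)]   # toy left/right segments
    print(fit_lane_sides(segs))
```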

21 pages, 8929 KB  
Article
Research and Evaluation on an Optical Automatic Detection System for the Defects of the Manufactured Paper Cups
by Ping Wang, Yang-Han Lee, Hsien-Wei Tseng and Cheng-Fu Yang
Sensors 2023, 23(3), 1452; https://doi.org/10.3390/s23031452 - 28 Jan 2023
Cited by 3 | Viewed by 2975
Abstract
In this paper, paper cups were used as the research objects, and machine vision detection technology was combined with different image processing techniques to investigate a non-contact optical automatic detection system for identifying defects in manufactured paper cups. The combined ring light was used as the light source, an infrared (IR) LED matrix panel was used to provide IR light to constantly highlight the outer edges of the detected objects, and a multi-grid pixel array was used as the image sensor. The image processing techniques, including the Gaussian filter, Sobel operator, binarization, and connected components, were used to enhance the inspection and recognition of the defects existing in the produced paper cups. There were three different detection processes for the paper cups, divided into internal, external, and bottom image acquisition processes. The present study demonstrated that all the detection processes could clearly detect surface defect features of the manufactured paper cups, such as dirt, burrs, holes, and uneven thickness. Our study also revealed that the average time for the investigated automatic optical detection system to detect the defects on the paper cups was only 0.3 s.
(This article belongs to the Section Optical Sensors)

24 pages, 23909 KB  
Article
EDTRS: A Superpixel Generation Method for SAR Images Segmentation Based on Edge Detection and Texture Region Selection
by Hang Yu, Haoran Jiang, Zhiheng Liu, Suiping Zhou and Xiangjie Yin
Remote Sens. 2022, 14(21), 5589; https://doi.org/10.3390/rs14215589 - 5 Nov 2022
Cited by 5 | Viewed by 3208
Abstract
The generation of superpixels is becoming a critical step in SAR image segmentation. However, most studies on superpixels only focused on clustering methods without considering the multiple features of SAR images. Generating superpixels for complex scenes is a challenging task. It is also time-consuming and inconvenient to manually adjust the parameters to regularize the shapes of superpixels. To address these issues, we propose a new superpixel generation method for SAR images based on edge detection and texture region selection (EDTRS), which takes into account the different features of SAR images. Firstly, a Gaussian function is applied in the neighborhood of each pixel in eight directions, and a Sobel operator is used to determine the redefined region. Then, 2D entropy is introduced to adjust the edge map. Secondly, local outlier factor (LOF) detection is used to eliminate speckle-noise interference in SAR images. We judge whether the texture has periodicity and introduce an edge map to select the appropriate region and extract texture features for the target pixel. A gray-level co-occurrence matrix (GLCM) and principal component analysis (PCA) are combined to extract texture features. Finally, we use a novel approach to combine the features extracted, and the pixels are clustered by the K-means method. Experimental results with different SAR images show that the proposed method outperforms existing superpixel generation methods with an increase of 5–10% in accuracy and produces more regular shapes.
(This article belongs to the Special Issue SAR Images Processing and Analysis)
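
The final clustering stage (stacking per-pixel features and grouping pixels with K-means) is sketched below; the toy feature stack of Gaussian-smoothed intensity, Sobel responses, and scaled coordinates merely stands in for the GLCM/PCA texture descriptors and edge maps combined in EDTRS.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.cluster import KMeans

def superpixel_labels(img, n_superpixels=50):
    """Cluster pixels on a small feature stack plus (x, y) coordinates,
    a stand-in for the edge/texture feature combination used by EDTRS."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([gaussian_filter(img, 1.0),
                      sobel(img, axis=0), sobel(img, axis=1),
                      0.05 * yy, 0.05 * xx], axis=-1).reshape(-1, 5)
    km = KMeans(n_clusters=n_superpixels, n_init=10, random_state=0).fit(feats)
    return km.labels_.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    sar = rng.gamma(shape=2.0, scale=0.5, size=(96, 96))   # toy speckled image
    labels = superpixel_labels(sar)
    print("superpixels:", labels.max() + 1)
```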

22 pages, 9964 KB  
Article
Using Improved Edge Detection Method to Detect Mining-Induced Ground Fissures Identified by Unmanned Aerial Vehicle Remote Sensing
by Duo Xu, Yixin Zhao, Yaodong Jiang, Cun Zhang, Bo Sun and Xiang He
Remote Sens. 2021, 13(18), 3652; https://doi.org/10.3390/rs13183652 - 13 Sep 2021
Cited by 29 | Viewed by 3550
Abstract
Information on the ground fissures induced by coal mining is important to the safety of coal mine production and the management of the environment in the mining area. In order to identify these fissures in a timely and accurate manner, a new method was proposed in the present paper, based on an unmanned aerial vehicle (UAV) equipped with a visible light camera and an infrared camera. With this equipment, edge detection technology was used to detect mining-induced ground fissures. Field experiments show the high efficiency of the UAV in monitoring mining-induced ground fissures. Furthermore, a reasonable time period between 3:00 am and 5:00 am under the studied conditions helps UAV infrared remote sensing identify fissures more effectively. The Roberts operator, Sobel operator, Prewitt operator, Canny operator and Laplacian operator were tested to detect the fissures in the visible image, infrared image and fused image. An improved edge detection method was then proposed based on the Laplacian of Gaussian, Canny and mathematical morphology operators. The peak signal-to-noise rate, effective edge rate, Pratt’s figure of merit and F-measure indicated that the proposed method was superior to the other methods. In addition, the fissures in infrared images at different times can be accurately detected by the proposed method except at 7:00 am, 1:00 pm and 3:00 pm.
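
The flavor of the improved detector (a Laplacian of Gaussian response gated by Canny edges and cleaned up with morphological closing) can be sketched as follows; the abstract does not specify how the three operators are combined, so this fusion rule and its thresholds are assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_laplace, binary_closing

def fissure_edges(gray, log_sigma=2.0, log_thresh=0.02):
    """Combine a LoG response with Canny edges, then close small gaps,
    roughly in the spirit of a LoG + Canny + morphology detector."""
    log = np.abs(gaussian_laplace(gray.astype(float) / 255.0, sigma=log_sigma))
    canny = cv2.Canny(gray, 40, 120) > 0
    combined = (log > log_thresh) & canny
    closed = binary_closing(combined, structure=np.ones((3, 3)))
    return (closed * 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in UAV frame
    print(fissure_edges(frame).sum() // 255, "edge pixels")
```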

24 pages, 10884 KB  
Article
Mixed Noise Estimation Model for Optimized Kernel Minimum Noise Fraction Transformation in Hyperspectral Image Dimensionality Reduction
by Tianru Xue, Yueming Wang, Yuwei Chen, Jianxin Jia, Maoxing Wen, Ran Guo, Tianxiao Wu and Xuan Deng
Remote Sens. 2021, 13(13), 2607; https://doi.org/10.3390/rs13132607 - 2 Jul 2021
Cited by 18 | Viewed by 3159
Abstract
Dimensionality reduction (DR) is of great significance for simplifying and optimizing hyperspectral image (HSI) features. As a widely used DR method, kernel minimum noise fraction (KMNF) transformation preserves the high-order structures of the original data perfectly. However, the conventional KMNF noise estimation (KMNF-NE) uses the local regression residual of neighbourhood pixels, which depends heavily on spatial information. Due to the limited spatial resolution, there are many mixed pixels in HSI, making KMNF-NE unreliable for noise estimation and leading to poor performance in KMNF for classification on HSIs with low spatial resolution. In order to overcome this problem, a mixed noise estimation model (MNEM) is proposed in this paper for optimized KMNF (OP-KMNF). The MNEM adopts the sequential and linear combination of the Gaussian prior denoising model, median filter, and Sobel operator to estimate noise. It retains more details and edge features, making it more suitable for noise estimation in KMNF. Experiments using several HSI datasets with different spatial and spectral resolutions are conducted. The results show that, compared with some other DR methods, the improvement of OP-KMNF in average classification accuracy is up to 4%. To improve the efficiency, the OP-KMNF was implemented on graphics processing units (GPU) and sped up by about 60× compared to the central processing unit (CPU) implementation. The outcome demonstrates the significant performance of OP-KMNF in terms of classification ability and execution efficiency.
(This article belongs to the Section Remote Sensing Image Processing)
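
The abstract describes MNEM only as a sequential and linear combination of a Gaussian prior denoising model, a median filter, and a Sobel operator. One way to read that is: denoise each band, take the residual as the noise estimate, and use the Sobel response to down-weight residuals on real edges. The sketch below encodes that reading with placeholder weights and should be treated as an interpretation, not the published model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, sobel

def estimate_band_noise(band, sigma=1.0, size=3):
    """Residual after Gaussian + median denoising, attenuated where the
    Sobel response is strong so that true edges are not counted as noise.
    One interpretation of MNEM, not the authors' exact formulation."""
    denoised = median_filter(gaussian_filter(band, sigma), size=size)
    residual = band - denoised
    edges = np.hypot(sobel(band, axis=0), sobel(band, axis=1))
    weight = 1.0 / (1.0 + edges / (edges.mean() + 1e-12))
    return residual * weight

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    cube = rng.random((32, 32, 20))                      # toy hyperspectral cube
    noise = np.stack([estimate_band_noise(cube[..., b])
                      for b in range(cube.shape[-1])], axis=-1)
    print("estimated noise std per band:", noise.std(axis=(0, 1))[:3])
```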

15 pages, 6298 KB  
Article
Development and Validation of an Online Analyzer for Particle Size Distribution in Conveyor Belts
by Claudio Leiva, Claudio Acuña and Diego Castillo
Minerals 2021, 11(6), 581; https://doi.org/10.3390/min11060581 - 30 May 2021
Cited by 5 | Viewed by 4131
Abstract
Online measurement of particle size distribution in the crushing process is critical to reduce particle obstruction and to reduce energy consumption. Nevertheless, commercial systems to determine size distribution do not accurately identify large particles (20–250 mm), leading to particle obstruction, increasing energy consumption, and reducing equipment availability. To solve this problem, an online sensor prototype was designed, implemented, and validated in a copper ore plant. The sensor is based on 2D images and specific detection algorithms. The system consists of a camera (1024 p) mounted on the conveyor belt and image processing software, which improves the detection of large particle edges. The algorithms determine the geometry of each particle from a sequence of digital photographs. For the development of the software, noise reduction algorithms were evaluated and selected, and a routine was designed to incorporate mathematical morphology operations (erosion, dilation, opening, closing) and segmentation algorithms (Roberts, Prewitt, Sobel, Laplacian–Gaussian, Canny, watershed, geodesic transform). The software was implemented (in the MATLAB Image Processing Toolbox) based on the 3D equivalent diameter (using major and minor axes, assuming an oblate spheroid). The size distribution was fitted to the Rosin–Rammler function along the major axis. To test the sensor capabilities, laboratory images were used, where the results show a precision of 5% in Rosin–Rammler model fitting. To validate the large particle detection algorithms, a pilot test was implemented in a large mining company in Chile. The accuracy of large particle detection was 60% to 67% depending on the crushing stage. In conclusion, it is shown that the prototype and software allow online measurement of large particle sizes, which provides useful information for screening equipment maintenance and control of the crushers’ open-side setting, reducing the obstruction risk and increasing operational availability.
(This article belongs to the Special Issue Process Optimization in Mineral Processing)
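
Fitting measured major-axis sizes to a Rosin–Rammler distribution is a standard curve fit; a compact sketch with synthetic sizes follows. The cumulative form R(d) = 1 - exp(-(d/d63)^n) and the initial guesses are conventional choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler(d, d63, n):
    """Cumulative fraction passing size d for a Rosin-Rammler distribution."""
    return 1.0 - np.exp(-(d / d63) ** n)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    sizes_mm = rng.weibull(1.8, 500) * 90 + 20       # synthetic 20-250 mm particles
    d = np.sort(sizes_mm)
    passing = np.arange(1, d.size + 1) / d.size      # empirical cumulative fraction
    (d63, n), _ = curve_fit(rosin_rammler, d, passing, p0=(np.median(d), 1.5))
    print(f"d63 = {d63:.1f} mm, uniformity n = {n:.2f}")
```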
