Search Results (22)

Search Parameters:
Keywords = fast dense feature-matching

20 pages, 7676 KB  
Article
A High-Precision Matching Method for Heterogeneous SAR Images Based on ROEWA and Angle-Weighted Gradient
by Anxi Yu, Wenhao Tong, Zhengbin Wang, Keke Zhang and Zhen Dong
Remote Sens. 2025, 17(5), 749; https://doi.org/10.3390/rs17050749 - 21 Feb 2025
Cited by 1 | Viewed by 992
Abstract
The prerequisite for the fusion processing of heterogeneous SAR images is high-precision image matching, which can be widely applied in areas such as geometric localization, scene-matching navigation, and target recognition. This study proposes a method for high-precision matching of heterogeneous SAR images based on the combination of the single-scale ratio of exponentially weighted averages (ROEWA) operator and an angle-weighted gradient (RAWG). The method consists of three main steps: feature point extraction, feature description, and feature matching. The algorithm uses a block-based SAR-Harris operator to extract feature points from the reference SAR image, effectively suppressing the interference of coherent speckle noise and improving the uniformity of the feature point distribution. By employing the single-scale ROEWA operator in conjunction with angle-weighted gradient projection, a 3D dense feature descriptor is constructed, enhancing the consistency of gradient features across heterogeneous SAR images and smoothing the search surface. Fast template matching is realized through an optimal feature construction strategy and a frequency-domain SSD algorithm. Experimental comparisons with other mainstream matching methods show that the root mean square error (RMSE) of the proposed method is 47.5% lower than that of CFOG; compared with HOPES, the error is 15.4% lower and the matching time is 34.3% shorter. The proposed approach effectively addresses the nonlinear intensity differences, geometric disparities, and coherent speckle noise in heterogeneous SAR images, and its prominent advantages are robustness, high precision, and efficiency.
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
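The frequency-domain SSD template matching described above can be sketched with FFT-based convolutions. This is a minimal NumPy/SciPy illustration of the general technique, not the authors' implementation; the image and template below are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def ssd_match(image, templ):
    """Sum-of-squared-differences template matching computed in the
    frequency domain: SSD = local energy of I - 2*corr(I, T) + sum(T^2),
    where the cross-correlation and the local energy both come from
    FFT-based convolutions instead of a sliding-window loop."""
    th, tw = templ.shape
    # cross-correlation of the image with the template
    corr = fftconvolve(image, templ[::-1, ::-1], mode="valid")
    # sum of squared image values under each window position
    local_energy = fftconvolve(image ** 2, np.ones((th, tw)), mode="valid")
    return local_energy - 2.0 * corr + np.sum(templ ** 2)

# the SSD map attains its minimum where the template was cut out
rng = np.random.default_rng(0)
image = rng.random((64, 64))
templ = image[20:30, 15:27].copy()
ssd = ssd_match(image, templ)
```

Because every term is a convolution, the cost is dominated by a few FFTs rather than an O(image × template) scan, which is what makes dense template search fast.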

21 pages, 12827 KB  
Article
Research on the Registration of Aerial Images of Cyclobalanopsis Natural Forest Based on Optimized Fast Sample Consensus Point Matching with SIFT Features
by Peng Wu, Hailong Liu, Xiaomei Yi, Lufeng Mo, Guoying Wang and Shuai Ma
Forests 2024, 15(11), 1908; https://doi.org/10.3390/f15111908 - 29 Oct 2024
Viewed by 1395
Abstract
The effective management and conservation of forest resources hinge on accurate monitoring. Nonetheless, individual remote-sensing images captured by low-altitude unmanned aerial vehicles (UAVs) cannot capture the entirety of a forest's characteristics. Applying image-stitching technology to high-resolution drone imagery enables a prompt evaluation of forest resources, encompassing quantity, quality, and spatial distribution. This study introduces an improved SIFT algorithm designed to tackle the low matching rates and long registration times encountered with forest images characterized by dense textures. By adopting the SIFT-OCT (SIFT omitting the initial scale space) approach, the algorithm bypasses the initial scale space, reducing the number of ineffective feature points and improving processing efficiency. To bolster the SIFT algorithm's resilience to rotation and illumination variations, and to furnish supplementary information for registration even when fewer valid feature points are available, a gradient location and orientation histogram (GLOH) descriptor is integrated. For feature matching, the more computationally efficient Manhattan distance is used to filter feature points, further improving efficiency. The fast sample consensus (FSC) algorithm is then applied to remove mismatched point pairs, refining registration accuracy. This research also investigates the influence of vegetation coverage and image overlap rates on the algorithm's efficacy, using five sets of Cyclobalanopsis natural forest images. Experimental outcomes reveal that the proposed method reduces registration time by an average factor of 3.66 relative to SIFT, 1.71 relative to SIFT-OCT, 5.67 relative to PSO-SIFT, and 3.42 relative to KAZE, demonstrating its superior performance.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
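The Manhattan-distance filtering step can be illustrated with a brute-force matcher plus Lowe's ratio test. This NumPy sketch uses toy random descriptors in place of the SIFT/GLOH descriptors the paper computes.

```python
import numpy as np

def l1_ratio_match(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2
    under the Manhattan (L1) distance, keeping a match only when it is
    clearly better than the second-best candidate (ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.abs(desc2 - d).sum(axis=1)  # L1 distance, no squares
        j, k = np.argsort(dists)[:2]           # best and second-best
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# toy data: three descriptors from one image, slightly perturbed copies
rng = np.random.default_rng(1)
desc2 = rng.random((20, 32))
desc1 = desc2[[3, 7, 11]] + 0.001
```

The L1 distance avoids the multiplications of the Euclidean distance, which is the source of the efficiency gain the abstract mentions.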

22 pages, 16538 KB  
Article
BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction
by Daixian Zhu, Peixuan Liu, Qiang Qiu, Jiaxin Wei and Ruolin Gong
Sensors 2024, 24(14), 4693; https://doi.org/10.3390/s24144693 - 19 Jul 2024
Cited by 5 | Viewed by 2824
Abstract
SLAM is a critical technology for enabling autonomous navigation and positioning in unmanned vehicles. Traditional visual simultaneous localization and mapping algorithms are built upon the assumption of a static scene, overlooking the impact of dynamic targets within real-world environments. Interference from dynamic targets can significantly degrade the system's localization accuracy or even lead to tracking failure. To address these issues, we propose a dynamic visual SLAM system named BY-SLAM, based on BEBLID and semantic information extraction. First, the BEBLID descriptor is introduced to describe Oriented FAST feature points, improving both feature-point matching accuracy and speed. Next, FasterNet replaces the backbone network of YOLOv8s to speed up semantic information extraction, and DBSCAN clustering of the object-detection results yields a more refined semantic mask. Finally, by leveraging the semantic mask and epipolar constraints, dynamic feature points are identified and eliminated, so that only static feature points are used for pose estimation and for building a dense 3D map that excludes dynamic targets. Experimental evaluations on both the TUM RGB-D dataset and real-world scenarios demonstrate the effectiveness of the proposed algorithm at filtering out dynamic targets. On average, localization accuracy on the TUM RGB-D dataset improves by 95.53% compared to ORB-SLAM3. Comparative analyses against classical dynamic SLAM systems further corroborate the improvements in localization accuracy, map readability, and robustness achieved by BY-SLAM.
(This article belongs to the Section Navigation and Positioning)
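The epipolar-constraint check for dynamic points can be sketched as a point-to-epipolar-line distance. The fundamental matrix below is constructed by hand for a purely translating camera; in BY-SLAM it would be estimated from the matched features, so this is an illustration of the constraint, not the paper's code.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Distance from each point in image 2 to the epipolar line F @ x1
    of its match in image 1. Static points satisfy the constraint
    (residual near zero); large residuals flag dynamic points."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])        # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = x1 @ F.T                    # epipolar lines in image 2
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

# camera translating along x: epipolar lines are horizontal image rows
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
pts1 = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
pts2 = pts1 + [5.0, 0.0]                # static scene: rows unchanged
pts2[2, 1] += 8.0                       # a dynamic point drifting vertically
```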

23 pages, 121030 KB  
Article
Dense Feature Matching for Hazard Detection and Avoidance Using Machine Learning in Complex Unstructured Scenarios
by Daniel Posada and Troy Henderson
Aerospace 2024, 11(5), 351; https://doi.org/10.3390/aerospace11050351 - 28 Apr 2024
Cited by 4 | Viewed by 3445
Abstract
Exploring the Moon and Mars are crucial steps in advancing space exploration. Numerous missions aim to land and conduct research at various lunar locations, some of which possess challenging, largely uniform surfaces. Some of these areas are cataloged as lunar light plains; they are almost featureless and reflect more light than other lunar surfaces, which poses a challenge during navigation and landing. This paper compares traditional feature-matching techniques, specifically the scale-invariant feature transform (SIFT) and oriented FAST and rotated BRIEF (ORB), with novel machine learning approaches for dense feature matching in challenging, unstructured scenarios, focusing on lunar light plains. Traditional feature detection methods often struggle in environments characterized by uniform terrain and unusual lighting conditions, where distinguishable features are rare. Our study addresses these challenges and underscores the robustness of machine learning. The methodology involves an experimental analysis using images that mimic lunar-like landscapes representing these light plains, generating and comparing feature maps derived from traditional and learning-based methods. These maps are evaluated on their density and accuracy, both of which are critical for the structure-from-motion reconstruction commonly used in landing navigation. The results demonstrate that machine learning techniques enhance feature detection and matching, providing denser representations of feature-sparse environments. This improvement indicates significant potential for machine learning to boost hazard detection and avoidance in space exploration and other complex applications.

18 pages, 3230 KB  
Article
Fast CU Decision Algorithm Based on CNN and Decision Trees for VVC
by Hongchan Li, Peng Zhang, Baohua Jin and Qiuwen Zhang
Electronics 2023, 12(14), 3053; https://doi.org/10.3390/electronics12143053 - 12 Jul 2023
Cited by 3 | Viewed by 2238
Abstract
Compared with the previous generation, High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC) introduces a quadtree and multi-type tree (QTMT) partition structure with nested multi-class trees, so that the coding unit (CU) partition can better match the video texture features. This partition structure significantly improves VVC's compression efficiency, but the computational complexity also increases significantly, resulting in longer encoding times. We therefore propose a fast CU partition decision algorithm based on a DenseNet network and a decision tree (DT) classifier to reduce the coding complexity of VVC and save encoding time. We extract spatial feature vectors with the DenseNet model by predicting the boundary probabilities of 4 × 4 blocks within 64 × 64 coding units. Using these spatial features as input, the DT classifier selects the top N partition modes with the highest predicted probability, and the remaining modes are skipped to reduce computational complexity. Finally, the optimal partition mode is selected by comparing RD costs. Our proposed algorithm achieves 47.6% encoding-time savings on VTM10.0, while the BDBR increases by only 0.91%.
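The top-N mode selection can be sketched with scikit-learn's decision tree. The eight-dimensional feature vectors, the mode labels, and the training data below are all hypothetical stand-ins for the DenseNet-derived spatial features; this shows the candidate-pruning idea, not the paper's trained model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# hypothetical QTMT partition-mode labels and toy spatial features
MODES = ["no_split", "quad", "bt_h", "bt_v", "tt_h", "tt_v"]
rng = np.random.default_rng(42)
X = rng.random((200, 8))
y = rng.integers(0, len(MODES), 200)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

def candidate_modes(features, n=3):
    """Keep only the n partition modes with the highest predicted
    probability; the encoder then evaluates RD cost for these few
    candidates and skips the rest."""
    proba = clf.predict_proba(features.reshape(1, -1))[0]
    return [MODES[i] for i in np.argsort(proba)[::-1][:n]]
```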

22 pages, 1245 KB  
Article
Computer-Vision-Based Vibration Tracking Using a Digital Camera: A Sparse-Optical-Flow-Based Target Tracking Method
by Guang-Yu Nie, Saran Srikanth Bodda, Harleen Kaur Sandhu, Kevin Han and Abhinav Gupta
Sensors 2022, 22(18), 6869; https://doi.org/10.3390/s22186869 - 11 Sep 2022
Cited by 28 | Viewed by 6805
Abstract
Computer-vision-based target tracking is a technology applied to a wide range of research areas, including structural vibration monitoring. However, current target tracking methods suffer from noise in digital image processing. In this paper, a new target tracking method based on the sparse optical flow technique is introduced to improve tracking accuracy, especially when the target has a large displacement. The proposed method utilizes the Oriented FAST and Rotated BRIEF (ORB) technique, which combines FAST (Features from Accelerated Segment Test), a feature detector, with BRIEF (Binary Robust Independent Elementary Features), a binary descriptor. ORB maintains a variety of keypoints and combines a multi-level strategy with an optical flow algorithm to track keypoints with large motion vectors. An outlier removal method based on the Hamming distance and the interquartile range (IQR) score is then introduced to minimize the error. The proposed target tracking method is verified through a lab experiment: a three-story shear building structure subjected to various harmonic excitations. It is compared with existing sparse-optical-flow-based target tracking methods and with methods based on three other types of techniques, i.e., feature matching, dense optical flow, and template matching. The results show that tracking performance is greatly improved by the multi-level strategy and the proposed outlier removal method, and the proposed sparse-optical-flow-based method achieves the best accuracy among the compared target tracking methods.
(This article belongs to the Special Issue Camera Calibration and 3D Reconstruction)
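The IQR part of the outlier rejection can be sketched on tracked motion-vector magnitudes. The displacement values below are invented for illustration; the paper additionally screens matches by Hamming distance before this step.

```python
import numpy as np

def iqr_inliers(displacements, k=1.5):
    """Boolean mask keeping motion vectors whose magnitude lies inside
    [Q1 - k*IQR, Q3 + k*IQR]; everything outside is treated as a
    tracking outlier and dropped."""
    mag = np.hypot(displacements[:, 0], displacements[:, 1])
    q1, q3 = np.percentile(mag, [25, 75])
    iqr = q3 - q1
    return (mag >= q1 - k * iqr) & (mag <= q3 + k * iqr)

# five consistent keypoint displacements and one gross tracking failure
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1],
                 [1.1, -0.2], [0.5, 0.8], [30.0, 0.0]])
```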

17 pages, 6082 KB  
Article
Research on Multi-View 3D Reconstruction Technology Based on SFM
by Lei Gao, Yingbao Zhao, Jingchang Han and Huixian Liu
Sensors 2022, 22(12), 4366; https://doi.org/10.3390/s22124366 - 9 Jun 2022
Cited by 47 | Viewed by 8072
Abstract
Multi-view 3D reconstruction technology restores a 3D model of practical value or required objects from a group of images. This paper designs and implements a complete multi-view 3D reconstruction pipeline. It fuses SIFT and SURF feature-point extraction results to increase the number of feature points, adds proportional constraints to improve the robustness of feature-point matching, and uses RANSAC to eliminate false matches. In the sparse reconstruction stage, the traditional incremental SFM algorithm is accurate but slow, while the traditional global SFM algorithm is fast but less accurate and less robust. To address these shortcomings, this paper proposes a hybrid SFM algorithm that avoids both the long runtime of incremental SFM and the low precision and poor robustness of global SFM. Finally, a depth-map-fusion MVS algorithm completes the dense reconstruction of objects, and surface reconstruction makes the reconstructed model more realistic.
(This article belongs to the Collection 3D Imaging and Sensing System)
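The RANSAC mismatch-elimination step can be sketched with the simplest possible motion model, a pure 2D translation, where a single correspondence is enough to form a hypothesis. The paper fits richer models; the point sets below are synthetic.

```python
import numpy as np

def ransac_translation(pts1, pts2, thresh=2.0, iters=200, seed=0):
    """Minimal RANSAC: the model is a pure 2D translation, so each
    hypothesis comes from one sampled match. Returns the translation
    with the largest inlier set and the corresponding inlier mask."""
    rng = np.random.default_rng(seed)
    best_t, best_mask = None, np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]                        # one-match hypothesis
        resid = np.linalg.norm(pts2 - (pts1 + t), axis=1)
        mask = resid < thresh
        if mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    return best_t, best_mask

# 60 matches shifted by (5, -3), with the first 10 replaced by mismatches
rng = np.random.default_rng(3)
p1 = rng.random((60, 2)) * 100
p2 = p1 + np.array([5.0, -3.0])
p2[:10] = rng.random((10, 2)) * 100
```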

21 pages, 2235 KB  
Article
RTSDM: A Real-Time Semantic Dense Mapping System for UAVs
by Zhiteng Li, Jiannan Zhao, Xiang Zhou, Shengxian Wei, Pei Li and Feng Shuang
Machines 2022, 10(4), 285; https://doi.org/10.3390/machines10040285 - 18 Apr 2022
Cited by 14 | Viewed by 4891
Abstract
Intelligent drones or flying robots play a significant role in serving our society in applications such as rescue, inspection, and agriculture. Understanding the surrounding scene is an essential capability for further autonomous tasks; intuitively, knowing the UAV's own location and creating a semantic 3D map are both significant for full autonomy. However, integrating simultaneous localization, 3D reconstruction, and semantic segmentation is a huge challenge for power-limited systems such as UAVs. To address this, we propose a real-time semantic mapping system that helps a power-limited UAV system understand its location and surroundings. The proposed approach includes a modified visual SLAM with the direct method to accelerate the computationally intensive feature-matching process, and a real-time semantic segmentation module at the back end. The semantic module runs a lightweight network, BiSeNetV2, and performs segmentation only on key frames from the front-end SLAM task. Considering fast navigation and the on-board memory resources, we provide a real-time dense-map-building module that generates an OctoMap with the segmented semantic map. The proposed system is verified in real-time experiments on a UAV platform with a Jetson TX2 as the computation unit. A frame rate of around 12 Hz and a semantic segmentation accuracy of around 89% demonstrate that our proposed system is computationally efficient while providing sufficient information for fully autonomous tasks such as rescue and inspection.
(This article belongs to the Topic Motion Planning and Control for Robotics)

25 pages, 10939 KB  
Article
A Transformer-Based Coarse-to-Fine Wide-Swath SAR Image Registration Method under Weak Texture Conditions
by Yibo Fan, Feng Wang and Haipeng Wang
Remote Sens. 2022, 14(5), 1175; https://doi.org/10.3390/rs14051175 - 27 Feb 2022
Cited by 30 | Viewed by 5212
Abstract
As an all-weather, all-day remote sensing data source, SAR (Synthetic Aperture Radar) images have been widely applied, and their registration accuracy has a direct impact on downstream task effectiveness. Existing registration algorithms mainly focus on small sub-images, and accurate matching methods for large-size images are lacking. This paper proposes a high-precision, rapid, large-size SAR image dense-matching method. The method mainly includes four steps: down-sampled image pre-registration, sub-image acquisition, dense matching, and the transformation solution. First, the ORB (Oriented FAST and Rotated BRIEF) operator and the GMS (Grid-based Motion Statistics) method are combined to perform rough matching in the semantically rich down-sampled image. From the resulting feature point pairs, a group of clustering centers and corresponding sub-images is obtained. Subsequently, a Transformer-based deep learning method registers the images under weak texture conditions. Finally, the global transformation is obtained through RANSAC (Random Sample Consensus). Compared with the SOTA algorithm, the number of correct matching points obtained by our method increases by more than a factor of 2.47, and the root mean squared error (RMSE) is reduced by more than 4.16%. The experimental results demonstrate that our proposed method is efficient and accurate, providing a new approach to SAR image registration.
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)

16 pages, 4849 KB  
Article
3D Texture Reconstruction of Abdominal Cavity Based on Monocular Vision SLAM for Minimally Invasive Surgery
by Haibin Wu, Ruotong Xu, Kaiyang Xu, Jianbo Zhao, Yan Zhang, Aili Wang and Yuji Iwahori
Symmetry 2022, 14(2), 185; https://doi.org/10.3390/sym14020185 - 18 Jan 2022
Cited by 11 | Viewed by 3919
Abstract
The depth information of the abdominal tissue surface and the position of the laparoscope are very important for accurate surgical navigation in computer-aided surgery. It is difficult to determine the lesion location by empirically matching the laparoscopic visual field with the preoperative image, which can easily cause intraoperative errors. Targeting the complex abdominal environment, this paper constructs an improved monocular simultaneous localization and mapping (SLAM) system model that more accurately and faithfully reflects the abdominal cavity structure and its spatial relationships. First, to enhance the contrast between blood vessels and background, the contrast-limited adaptive histogram equalization (CLAHE) algorithm is introduced to preprocess abdominal images. Second, combined with the AKAZE algorithm, the Oriented FAST and Rotated BRIEF (ORB) algorithm is improved to extract features from abdominal images, which improves the accuracy of the extracted symmetric feature-point pairs, and the RANSAC algorithm is used to quickly eliminate the majority of mismatched pairs. A medical bag-of-words model replaces the traditional bag-of-words model to facilitate similarity comparison between abdominal images; it has stronger similarity-calculation ability and reduces the matching time between the current abdominal image frame and historical frames. Finally, Poisson surface reconstruction transforms the point cloud into a triangular mesh surface, and the abdominal cavity texture image is superimposed on the 3D mesh to generate the texture of the inner abdominal wall. The surface of the resulting 3D model is smooth and realistic. Experimental results on the Hamlyn dataset show that the improved SLAM system increases feature-point registration accuracy and point-cloud density, and the visual effect of the dense point-cloud reconstruction is more realistic. The 3D reconstruction creates a realistic model that identifies blood vessels, nerves, and other tissues in the patient's focal area, enabling three-dimensional visualization of the focal area, facilitating the surgeon's observation and diagnosis, and supporting digital simulation of the operation to optimize the surgical plan.

21 pages, 15027 KB  
Article
A Stereo Matching Method for 3D Image Measurement of Long-Distance Sea Surface
by Ying Yang and Cunwei Lu
J. Mar. Sci. Eng. 2021, 9(11), 1281; https://doi.org/10.3390/jmse9111281 - 17 Nov 2021
Cited by 6 | Viewed by 3764
Abstract
Tsunamis are some of the most destructive natural disasters. Some proposed tsunami measurement and arrival prediction systems use a limited number of instruments to judge the occurrence of a tsunami and forecast its arrival time, location, and scale. Since the number of measurement instruments is limited, large prediction errors can occur. To solve this problem, a long-distance tsunami measurement system based on the binocular stereo vision principle is proposed in this paper, with a measuring range 4–20 km from the system deployment site. This paper focuses on the stereo matching method for the proposed system: a two-step method that first performs fast sparse matching and then completes high-precision dense matching based on the sparse-matching results. A matching descriptor based on the physical features of sea waves is proposed to overcome the matching difficulty caused by the self-similar texture of sea-surface images, and the relationship between disparity and the y coordinate is modeled to reduce the matching search range. Experiments on sea-surface images with different shooting times and distances verify the effectiveness of the presented method.
(This article belongs to the Section Physical Oceanography)
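The disparity-row relationship used to shrink the dense-matching search range can be sketched as a simple linear fit over the sparse matches. The row and disparity values below are invented for illustration; the paper builds its own relation from its sparse-matching stage.

```python
import numpy as np

# sparse-matching results: image row y and measured disparity d (toy values)
ys = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
ds = np.array([52.0, 41.0, 30.5, 20.2, 9.9])

a, b = np.polyfit(ys, ds, 1)  # disparity falls off roughly linearly with row

def search_window(y, margin=4.0):
    """Bound the dense-matching disparity search to a small interval
    around the value predicted by the fitted disparity-row model."""
    d = a * y + b
    return d - margin, d + margin
```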

19 pages, 7552 KB  
Article
Crater Detection and Recognition Method for Pose Estimation
by Zihao Chen and Jie Jiang
Remote Sens. 2021, 13(17), 3467; https://doi.org/10.3390/rs13173467 - 1 Sep 2021
Cited by 17 | Viewed by 5354
Abstract
A crater detection and recognition algorithm is the key to crater-based pose estimation. Due to the changing viewing angle and varying height, a crater is imaged as an ellipse whose scale changes in the landing camera. In this paper, a robust and efficient crater detection and recognition algorithm that fuses information from sequence images for pose estimation is designed, usable both while flying in orbit and during the landing phase. Our method consists of two stages: stage 1 for crater detection and stage 2 for crater recognition. In stage 1, a single-stage network with dense anchor points (dense point crater detection network, DPCDN) handles multi-scale craters well, especially small and dense crater scenes. The fast feature-extraction layer (FEL) of the network improves detection speed and reduces network parameters without losing accuracy. We comprehensively evaluate this method and present state-of-the-art detection performance on a Mars crater dataset. In stage 2, taking the encoded features and the intersection over union (IOU) of craters as weights, we solve a weighted bipartite graph matching problem: craters in the image are matched both with previously identified craters ("frame-frame match", FFM) and with a pre-established crater database ("frame-database match", FDM). Combining FFM with FDM, recognition runs in real time on the CPU (25 FPS) with an average recognition precision of 98.5%. Finally, the recognition result is used to estimate the pose with the perspective-n-point (PnP) algorithm; the root mean square error (RMSE) of the resulting trajectories is less than 10 m and the angle error is less than 1.5 degrees.
(This article belongs to the Special Issue Cartography of the Solar System: Remote Sensing beyond Earth)
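The weighted bipartite matching in stage 2 can be sketched with the Hungarian algorithm from SciPy. Here only IOU serves as the weight, standing in for the paper's combination of encoded features and IOU, and the crater boxes are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_craters(dets, known):
    """Globally optimal assignment of detected craters to known craters,
    maximising total IOU, then discarding weak pairings."""
    cost = np.array([[-iou(d, k) for k in known] for d in dets])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if -cost[r, c] > 0.1]

# two detections and two database craters, listed in swapped order
dets = [(0, 0, 10, 10), (20, 20, 30, 30)]
known = [(21, 19, 31, 29), (1, 0, 11, 10)]
```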

23 pages, 8346 KB  
Article
Dense Robust 3D Reconstruction and Measurement for 3D Printing Process Based on Vision
by Ning Lv, Chengyu Wang, Yujing Qiao and Yongde Zhang
Appl. Sci. 2021, 11(17), 7961; https://doi.org/10.3390/app11177961 - 28 Aug 2021
Cited by 6 | Viewed by 3406
Abstract
The 3D printing process lacks real-time inspection: it is still an open-loop manufacturing process, and molding accuracy is low. Based on the 3D reconstruction theory of machine vision, and to meet the applicability requirements of 3D printing process detection, a matching fusion method is proposed. The fast nearest neighbor (FNN) method is used to search for matching point pairs. The matching point information of the FFT-SIFT algorithm, which is based on the fast Fourier transform, is superimposed on that of the AKAZE algorithm and fused to obtain denser feature-point matches and richer edge-feature information. Combining the incremental SFM algorithm with the global SFM algorithm, an integrated SFM sparse point-cloud reconstruction method is developed. The dense point cloud is reconstructed by the PMVS algorithm, the point-cloud model is meshed by Delaunay triangulation, and an accurate 3D reconstruction model is then obtained by texture mapping. The experimental results show that, compared with the classical SIFT algorithm, the speed of feature extraction increases by 25.0%, the number of feature matches increases by 72%, and the relative error of the 3D reconstruction is about 0.014%, which is close to the theoretical error.
(This article belongs to the Topic Additive Manufacturing)
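The fast nearest-neighbour (FNN) search can be sketched with a k-d tree, a common way to avoid brute-force descriptor comparison. The descriptors below are random stand-ins for the fused FFT-SIFT/AKAZE descriptors.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_matches(desc1, desc2, ratio=0.75):
    """Tree-accelerated nearest-neighbour matching with a ratio test:
    for each query descriptor, find its two closest neighbours and keep
    the match only when the best is clearly better than the runner-up."""
    tree = cKDTree(desc2)
    d, idx = tree.query(desc1, k=2)
    keep = d[:, 0] < ratio * d[:, 1]
    return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])

rng = np.random.default_rng(5)
desc2 = rng.random((30, 16))
desc1 = desc2[[2, 9]] + 0.001  # perturbed copies of two descriptors
```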

14 pages, 3870 KB  
Article
Age-Related Changes in the Primary Motor Cortex of Newborn to Adult Domestic Pig Sus scrofa domesticus
by Salvatore Desantis, Serena Minervini, Lorenzo Zallocco, Bruno Cozzi and Andrea Pirone
Animals 2021, 11(7), 2019; https://doi.org/10.3390/ani11072019 - 6 Jul 2021
Cited by 6 | Viewed by 6479
Abstract
The pig has been increasingly used as a suitable animal model in translational neuroscience. However, several features of the fast-growing, immediately motor-competent cerebral cortex of this species have not been adequately described. This study analyzes the cytoarchitecture of the primary motor cortex (M1) of newborn, young, and adult pigs (Sus scrofa domesticus). Moreover, we investigated the distribution of neural cells expressing the calcium-binding proteins (CaBPs) calretinin (CR) and parvalbumin (PV) throughout M1. The primary motor cortex of newborn piglets was characterized by a dense neuronal arrangement that made the discrimination of the cell layers difficult, except for layer one. The absence of a clearly recognizable layer four, typical of the agranular cortex, was noted in young and adult pigs. The morphometric and immunohistochemical analyses revealed age-associated changes characterized by (1) an increase in thickness and a reduction in neuronal density (number of cells/mm² of M1) during the first year of life; (2) morphological changes of CR-immunoreactive neurons in the first months of life; and (3) a higher density of CR- and PV-immunopositive neurons in newborns than in young and adult pigs. Since most of the present findings match those of the human M1, this study strengthens the growing evidence that the pig brain can be a valuable translational animal model during growth and development.

16 pages, 2690 KB  
Article
A Densely Connected GRU Neural Network Based on Coattention Mechanism for Chinese Rice-Related Question Similarity Matching
by Haoriqin Wang, Huaji Zhu, Huarui Wu, Xiaomin Wang, Xiao Han and Tongyu Xu
Agronomy 2021, 11(7), 1307; https://doi.org/10.3390/agronomy11071307 - 27 Jun 2021
Cited by 16 | Viewed by 3099
Abstract
In the question-and-answer (Q&A) communities of the "China Agricultural Technology Extension Information Platform", thousands of rice-related Chinese questions are newly added every day. Rapid detection of questions with the same semantics is key to the success of a rice-related intelligent Q&A system. To allow fast and automatic detection of semantically identical rice-related questions, we propose a new method based on Coattention-DenseGRU (Gated Recurrent Unit). According to the characteristics of rice-related questions, we applied Word2vec with the TF-IDF (Term Frequency-Inverse Document Frequency) method to process and analyze the text data, and compared it with the standalone Word2vec, GloVe, and TF-IDF methods. Combined with an agricultural word-segmentation dictionary, applying Word2vec with TF-IDF effectively solves the problem of high-dimensional, sparse data in rice-related text. Each network layer employs the connection information of features and the hidden features of all previous recursive layers. To alleviate the growth in feature-vector size caused by dense splicing, an autoencoder is used after dense concatenation. The experimental results show that rice-related question similarity matching based on Coattention-DenseGRU can improve the utilization of text features, reduce the loss of features, and achieve fast and accurate similarity matching on the rice-related question dataset. The precision and F1 values of the proposed model were 96.3% and 96.9%, respectively. Compared with seven other question similarity matching models, our method sets a new state of the art on our rice-related question dataset.
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture)
