Search Results (17)

Search Parameters:
Keywords = aliasing mitigation

25 pages, 7750 KiB  
Article
Pyramidal Predictive Network V2: An Improved Predictive Architecture and Training Strategies for Future Perception Prediction
by Chaofan Ling, Junpei Zhong, Weihua Li, Ran Dong and Mingjun Dai
Big Data Cogn. Comput. 2025, 9(4), 79; https://doi.org/10.3390/bdcc9040079 - 28 Mar 2025
Viewed by 217
Abstract
In this paper, we propose an improved version of the Pyramidal Predictive Network (PPNV2), a theoretical framework inspired by predictive coding, which addresses the limitations of its predecessor (PPNV1) in the task of future perception prediction. While PPNV1 employed a temporal pyramid architecture and demonstrated promising results, its innate signal processing led to aliasing in the prediction, restricting its application in robotic navigation. We analyze the signal dissemination and characteristic artifacts of PPNV1 and introduce architectural enhancements and training strategies to mitigate these issues. The improved architecture focuses on optimizing information dissemination and reducing aliasing in neural networks. We redesign the downsampling and upsampling components to enable the network to construct images more effectively from low-frequency-input Fourier features, replacing the simple concatenation of different inputs in the previous version. Furthermore, we refine the training strategies to alleviate input inconsistency during training and testing phases. The enhanced model exhibits increased interpretability, stronger prediction accuracy, and improved quality of predictions. The proposed PPNV2 offers a more robust and efficient approach to future video-frame prediction, overcoming the limitations of its predecessor and expanding its potential applications in various robotic domains, including pedestrian prediction, vehicle prediction, and navigation. Full article
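A core idea here, low-pass filtering feature maps before striding so that subsampling does not alias high-frequency content, can be sketched in a few lines. The NumPy example below uses a 3x3 binomial blur and stride 2 as illustrative assumptions; it is a generic anti-aliased downsampling in the spirit of blur-pooling, not PPNV2's actual redesigned layers.

```python
import numpy as np
from scipy.signal import convolve2d

def antialiased_downsample(feature_map, stride=2):
    """Low-pass filter a 2D feature map with a binomial kernel, then subsample.
    A generic anti-aliased alternative to naive strided downsampling
    (not PPNV2's exact operator)."""
    k1d = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1d, k1d)
    kernel /= kernel.sum()                      # normalized 3x3 binomial blur
    blurred = convolve2d(feature_map, kernel, mode="same", boundary="symm")
    return blurred[::stride, ::stride]          # stride only after filtering

def naive_downsample(feature_map, stride=2):
    """Plain strided subsampling, for comparison: aliases high frequencies."""
    return feature_map[::stride, ::stride]

if __name__ == "__main__":
    x = np.add.outer(np.arange(16), np.arange(16)) % 2.0   # checkerboard: worst case
    print(naive_downsample(x))        # collapses to all zeros (aliasing)
    print(antialiased_downsample(x))  # stays near the 0.5 mean of the pattern
```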

15 pages, 24707 KiB  
Article
Anti-Aliasing and Anti-Leakage Frequency–Wavenumber Filtering Method for Linear Noise Suppression in Irregular Coarse Seismic Data
by Shengqiang Mu, Liang Huang, Liying Ren, Guoxu Shu and Xueliang Li
Minerals 2025, 15(2), 107; https://doi.org/10.3390/min15020107 - 23 Jan 2025
Viewed by 845
Abstract
Linear noise, a significant type of interference in exploration seismic data, adversely affects the signal-to-noise ratio (SNR) and imaging resolution. As seismic exploration advances, the constraints of the acquisition environment hinder the ability to acquire seismic data in a regular and dense manner, complicating the suppression of linear noise. To address this challenge, we have developed an anti-aliasing and anti-leakage frequency–wavenumber (f-k) filtering method. This approach effectively mitigates issues of spatial aliasing and spectral leakage caused by irregular coarse data acquisition by integrating linear moveout correction and anti-leakage Fourier transform into traditional f-k filtering. The efficacy of our method was demonstrated through examples of linear noise suppression on both irregular coarse synthetic data and field seismic data. Full article
(This article belongs to the Special Issue Seismics in Mineral Exploration)
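As a simplified illustration of combining linear moveout (LMO) correction with f-k filtering, the NumPy sketch below flattens linear noise of an assumed velocity (which also reduces its dip, and hence its spatial aliasing, before the FFT), notches the wavenumber band where the flattened noise concentrates, and undoes the shift. It is a plain filter on regularly sampled traces; the anti-leakage Fourier transform for irregular coarse geometries that is central to the paper is not reproduced, and the velocity, sample rate, and trace spacing are assumptions.

```python
import numpy as np

def lmo_fk_linear_noise_filter(gather, dt, dx, v_noise, k_cut):
    """(1) LMO-flatten linear noise travelling at v_noise, (2) notch the narrow
    wavenumber band around k = 0 where the flattened noise now sits, (3) undo
    the LMO shift.  gather has shape (n_samples, n_traces)."""
    nt, nx = gather.shape
    shifts = np.round(np.arange(nx) * dx / (v_noise * dt)).astype(int)

    # LMO correction (kept as a simple circular shift for the sketch).
    corrected = np.column_stack(
        [np.roll(gather[:, ix], -shifts[ix]) for ix in range(nx)])

    spec = np.fft.fft2(corrected)
    k = np.fft.fftfreq(nx, d=dx)                 # spatial wavenumbers (1/m)
    spec[:, np.abs(k) < k_cut] = 0.0             # reject the flattened noise

    filtered = np.real(np.fft.ifft2(spec))
    return np.column_stack(                      # undo the LMO shift
        [np.roll(filtered[:, ix], shifts[ix]) for ix in range(nx)])

if __name__ == "__main__":
    nt, nx, dt, dx = 512, 96, 0.004, 25.0
    gather = np.zeros((nt, nx))
    for ix in range(nx):                         # synthetic linear noise at 1800 m/s
        it = int(round(ix * dx / (1800.0 * dt)))
        if it < nt:
            gather[it, ix] = 1.0
    cleaned = lmo_fk_linear_noise_filter(gather, dt, dx, v_noise=1800.0, k_cut=0.004)
    print(np.abs(gather).sum(), np.abs(cleaned).sum())   # noise energy drops sharply
```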

13 pages, 3888 KiB  
Article
LSTM Short-Term Wind Power Prediction Method Based on Data Preprocessing and Variational Modal Decomposition for Soft Sensors
by Peng Lei, Fanglan Ma, Changsheng Zhu and Tianyu Li
Sensors 2024, 24(8), 2521; https://doi.org/10.3390/s24082521 - 15 Apr 2024
Cited by 9 | Viewed by 1938
Abstract
Soft sensors have been extensively utilized to approximate real-time power prediction in wind power generation, which is challenging to measure instantaneously. The short-term forecast of wind power aims at providing a reference for the dispatch of the intraday power grid. This study proposes a soft sensor model based on the Long Short-Term Memory (LSTM) network, combining data preprocessing with Variational Modal Decomposition (VMD) to improve wind power prediction accuracy. It adopts the isolation forest algorithm for anomaly detection in the original wind power series and handles the missing data by multiple imputation. Based on the preprocessed data samples, VMD is used to decompose the power data and reduce noise. The LSTM network then predicts each modal component separately, and the component predictions are summed to reconstruct the final wind power forecast. The experimental results show that the LSTM network trained with the Adam optimizer achieves better convergence accuracy. The VMD method exhibited superior decomposition outcomes due to its inherent Wiener-filter characteristics, which effectively mitigate noise and forestall modal aliasing. The Mean Absolute Percentage Error (MAPE) was reduced by 9.3508%, indicating that the LSTM network combined with the VMD method has better prediction accuracy. Full article
(This article belongs to the Section Electronic Sensors)
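The pipeline described above (outlier removal, imputation, decomposition, per-mode prediction, summation) can be mirrored with self-contained stand-ins: scikit-learn's IsolationForest and pandas interpolation for preprocessing, a crude FFT band split in place of VMD, and a lag-feature linear regressor in place of the LSTM. All of these substitutions are assumptions made purely for illustration; they are not the authors' models.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LinearRegression

def preprocess(power):
    """Flag anomalies with an isolation forest, blank them, and interpolate."""
    series = pd.Series(power, dtype=float)
    iso = IsolationForest(contamination=0.02, random_state=0)
    flags = iso.fit_predict(series.to_numpy().reshape(-1, 1))
    series[flags == -1] = np.nan
    return series.interpolate().ffill().bfill().to_numpy()

def band_split(signal, n_modes=4):
    """Crude frequency-band decomposition standing in for VMD."""
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, spec.size, n_modes + 1).astype(int)
    modes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spec)
        band[lo:hi] = spec[lo:hi]
        modes.append(np.fft.irfft(band, n=signal.size))
    return modes                                  # modes sum back to the signal

def forecast_mode(mode, n_lags=24):
    """One-step-ahead forecast from lag features (stand-in for the LSTM)."""
    X = np.column_stack([mode[i:len(mode) - n_lags + i] for i in range(n_lags)])
    y = mode[n_lags:]
    model = LinearRegression().fit(X, y)
    return model.predict(mode[-n_lags:].reshape(1, -1))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.standard_normal(2000)
    clean = preprocess(raw)
    prediction = sum(forecast_mode(m) for m in band_split(clean))
    print(f"next-step forecast: {prediction:.3f}")
```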

20 pages, 3313 KiB  
Article
Improved Methods for Fourier-Based Microwave Imaging
by Yuri Alvarez López and Fernando Las-Heras Andrés
Sensors 2023, 23(22), 9250; https://doi.org/10.3390/s23229250 - 17 Nov 2023
Cited by 1 | Viewed by 1337
Abstract
Fourier-based imaging has been widely adopted for microwave imaging thanks to its computational efficiency, achieved without compromising image resolution. Like other backpropagation imaging algorithms such as delay-and-sum (DAS), it is based on a far-field approximation of the electromagnetic expression relating fields and sources. To improve the accuracy of these techniques, this contribution presents a modified version of the well-known Fourier-based algorithm that takes into account the field radiated by the Tx/Rx antennas of the microwave imaging system. The impact on the imaged targets is discussed through a quantitative and qualitative analysis. The performance of the proposed method in subsampled microwave imaging scenarios is compared against other well-known aliasing mitigation methods. Full article
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology II)
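For orientation, the baseline that the authors modify belongs to the family of backpropagation imaging methods; the NumPy sketch below implements a generic single-frequency, monostatic back-propagation image (phase-compensate the measured field and sum it for every pixel). The 10 GHz frequency, aperture, and ideal point-scatterer model are assumptions, and the antenna-pattern correction that constitutes the paper's contribution is not included.

```python
import numpy as np

C0 = 3e8                                     # speed of light (m/s)

def simulate_monostatic_scan(xs_ant, target, freq):
    """Field measured along a linear aperture from one ideal point scatterer."""
    k = 2 * np.pi * freq / C0
    r = np.hypot(xs_ant - target[0], target[1])
    return np.exp(-1j * 2 * k * r) / r        # two-way propagation phase

def backpropagation_image(field, xs_ant, x_grid, z_grid, freq):
    """Delay-and-sum style back-propagation: re-phase and sum over antennas."""
    k = 2 * np.pi * freq / C0
    image = np.zeros((z_grid.size, x_grid.size), dtype=complex)
    for e_m, x_m in zip(field, xs_ant):
        r = np.hypot(x_grid[None, :] - x_m, z_grid[:, None])
        image += e_m * np.exp(1j * 2 * k * r)
    return np.abs(image)

if __name__ == "__main__":
    freq = 10e9                               # assumed 10 GHz measurement
    xs_ant = np.linspace(-0.5, 0.5, 81)       # dense aperture (sub-half-wavelength step)
    target = (0.1, 0.8)                       # scatterer at x = 0.1 m, z = 0.8 m
    field = simulate_monostatic_scan(xs_ant, target, freq)
    x_grid = np.linspace(-0.5, 0.5, 101)
    z_grid = np.linspace(0.4, 1.2, 81)
    img = backpropagation_image(field, xs_ant, x_grid, z_grid, freq)
    peak = np.unravel_index(img.argmax(), img.shape)
    print("peak at x =", round(x_grid[peak[1]], 2), "z =", round(z_grid[peak[0]], 2))
```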

30 pages, 38046 KiB  
Article
MosReformer: Reconstruction and Separation of Multiple Moving Targets for Staggered SAR Imaging
by Xin Qi, Yun Zhang, Yicheng Jiang, Zitao Liu and Chang Yang
Remote Sens. 2023, 15(20), 4911; https://doi.org/10.3390/rs15204911 - 11 Oct 2023
Cited by 1 | Viewed by 1350
Abstract
Maritime moving target imaging using synthetic aperture radar (SAR) demands high resolution and wide swath (HRWS). Using a variable pulse repetition interval (PRI), staggered SAR can achieve seamless HRWS imaging. Reconstruction must be performed because the variable PRI causes echo pulse loss and nonuniformly sampled signals in azimuth, both of which result in spectrum aliasing. The existing reconstruction methods are designed for stationary scenes and have achieved impressive results. However, for moving targets, these methods inevitably introduce reconstruction errors. The target motion, coupled with non-uniform sampling, aggravates the spectral aliasing and degrades the reconstruction performance. This phenomenon becomes more severe in scenes involving multiple moving targets, since each target's distinct motion parameters affect the spectrum aliasing differently, causing the various aliasing effects to overlap. Consequently, it becomes difficult to reconstruct and separate the echoes of multiple moving targets with high precision in staggered mode. To this end, motivated by deep learning, this paper proposes a novel Transformer-based algorithm to image multiple moving targets in a staggered SAR system. The reconstruction and separation of the multiple moving targets are achieved through a proposed network named MosReFormer (Multiple moving target separation and reconstruction Transformer). Adopting a gated single-head Transformer network with convolution-augmented joint self-attention, the proposed MosReFormer network can mitigate the reconstruction errors and separate the signals of multiple moving targets simultaneously. Simulations and experiments on raw data show that the reconstructed and separated results are close to the ideal imaging results obtained with uniform azimuth sampling at a constant PRI, verifying the feasibility and effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)

19 pages, 5937 KiB  
Article
Cross-Domain Open Set Fault Diagnosis Based on Weighted Domain Adaptation with Double Classifiers
by Huaqing Wang, Zhitao Xu, Xingwei Tong and Liuyang Song
Sensors 2023, 23(4), 2137; https://doi.org/10.3390/s23042137 - 14 Feb 2023
Cited by 6 | Viewed by 2723
Abstract
The application of transfer learning in fault diagnosis has been developed in recent years. It can use existing data to solve the problem of fault recognition under different working conditions. Due to the complexity of the equipment and the openness of the working environment in industrial production, the status of the equipment is changeable, and the collected signals can have new fault classes. Therefore, the open set recognition ability of the transfer learning method is an urgent research direction. The existing transfer learning model can have a severe negative transfer problem when solving the open set problem, resulting in the aliasing of samples in the feature space and the inability to separate the unknown classes. To solve this problem, we propose a Weighted Domain Adaptation with Double Classifiers (WDADC) method. Specifically, WDADC designs the weighting module based on Jensen–Shannon divergence, which can evaluate the similarity between each sample in the target domain and each class in the source domain. Based on this similarity, a weighted loss is constructed to promote the positive transfer between shared classes in the two domains to realize the recognition of shared classes and the separation of unknown classes. In addition, the structure of double classifiers in WDADC can mitigate the overfitting of the model by maximizing the discrepancy, which helps extract the domain-invariant and class-separable features of the samples when the discrepancy between the two domains is large. The model’s performance is verified in several fault datasets of rotating machinery. The results show that the method is effective in open set fault diagnosis and superior to the common domain adaptation methods. Full article
(This article belongs to the Special Issue Sensors for Machinery Condition Monitoring and Diagnosis)
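As a minimal illustration of similarity weighting based on the Jensen-Shannon divergence, the NumPy sketch below scores how close a target-domain sample's predicted class distribution is to each source class's mean prediction and turns that into a transfer weight. The softmax-style inputs and the weighting rule (one minus the smallest divergence) are illustrative assumptions, not the exact WDADC loss.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (base 2)."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)       # lies in [0, 1] with base 2

def transfer_weights(target_probs, source_class_probs):
    """Weight each target sample by its similarity to the closest source class."""
    weights = []
    for t in target_probs:
        divs = [js_divergence(t, s) for s in source_class_probs]
        weights.append(1.0 - min(divs))          # small divergence -> large weight
    return np.array(weights)

if __name__ == "__main__":
    source_class_probs = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]]   # two shared classes
    target_probs = [[0.85, 0.1, 0.05],           # looks like a shared class
                    [0.34, 0.33, 0.33]]          # looks unknown -> lower weight
    print(transfer_weights(target_probs, source_class_probs))
```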

18 pages, 14788 KiB  
Article
Multi-Scale Residual Aggregation Feature Pyramid Network for Object Detection
by Hongyang Wang and Tiejun Wang
Electronics 2023, 12(1), 93; https://doi.org/10.3390/electronics12010093 - 26 Dec 2022
Cited by 6 | Viewed by 2547
Abstract
The effective use of multi-scale features remains an open problem for object detection tasks. Recently proposed object detectors have usually used Feature Pyramid Networks (FPN) to fuse multi-scale features. Because Feature Pyramid Networks use a relatively simple feature-map fusion approach, they can lose or misalign semantic information during fusion. Several works have demonstrated that using a bottom-up structure in a Feature Pyramid Network can shorten the information path between lower layers and the topmost feature, allowing an adequate exchange of semantic information between different layers. We further enhance the bottom-up path by proposing a multi-scale residual aggregation Feature Pyramid Network (MSRA-FPN), which uses a unidirectional cross-layer residual module to aggregate features from multiple layers bottom-up in a triangular structure to the topmost layer. In addition, we introduce a Residual Squeeze and Excitation Module to mitigate the aliasing effects that occur when features from different layers are aggregated. MSRA-FPN enhances the semantic information of the high-level feature maps, mitigates information decay during feature fusion, and improves the model's ability to detect large objects. Experiments show that the proposed MSRA-FPN improves the performance of three baseline models by 0.5–1.9% on the PASCAL VOC dataset and is also quite competitive with other state-of-the-art FPN methods. On the MS COCO dataset, the proposed method improves the performance of the baseline model by 0.8% overall and by 1.8% for large object detection. To further validate the effectiveness of MSRA-FPN for large object detection, we constructed the Thangka Figure Dataset and conducted comparative experiments, where the proposed method improves the performance of the baseline model by 2.9–4.7%, reaching up to 71.2%. Full article
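The channel-reweighting idea behind a squeeze-and-excitation module with a residual connection, the ingredient used here to mitigate aliasing when layers are aggregated, can be sketched as a plain forward pass. The layer sizes, reduction ratio, and residual placement in the NumPy example below are assumptions and do not reproduce the actual Residual Squeeze and Excitation Module of MSRA-FPN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_se_block(features, w1, w2):
    """Squeeze-and-excitation with a residual connection.
    features: (channels, height, width); w1: (channels//r, channels);
    w2: (channels, channels//r)."""
    squeezed = features.mean(axis=(1, 2))                       # global average pool
    excitation = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # FC-ReLU-FC-sigmoid
    recalibrated = features * excitation[:, None, None]         # channel reweighting
    return features + recalibrated                              # residual connection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c, r = 64, 16
    feats = rng.standard_normal((c, 32, 32))
    w1 = rng.standard_normal((c // r, c)) * 0.1
    w2 = rng.standard_normal((c, c // r)) * 0.1
    out = residual_se_block(feats, w1, w2)
    print(out.shape)          # (64, 32, 32)
```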

43 pages, 9861 KiB  
Article
Hybrid Electrostatic–Atomic Accelerometer for Future Space Gravity Missions
by Nassim Zahzam, Bruno Christophe, Vincent Lebat, Emilie Hardy, Phuong-Anh Huynh, Noémie Marquet, Cédric Blanchard, Yannick Bidel, Alexandre Bresson, Petro Abrykosov, Thomas Gruber, Roland Pail, Ilias Daras and Olivier Carraz
Remote Sens. 2022, 14(14), 3273; https://doi.org/10.3390/rs14143273 - 7 Jul 2022
Cited by 22 | Viewed by 3619
Abstract
Long-term observation of Earth’s temporal gravity field with enhanced temporal and spatial resolution is a major objective for future satellite gravity missions. Improving the performance of the accelerometers present in such missions is one of the main paths to explore. In this context, we propose to study an original concept of a hybrid accelerometer associating a state-of-the-art electrostatic accelerometer (EA) and a promising quantum sensor based on cold atom interferometry. To assess the performance potential of such an instrument, numerical simulations were performed to determine its impact in terms of gravity field retrieval. Taking advantage of the long-term stability of the cold atom interferometer (CAI), it is shown that the reduced drift of the hybrid sensor could lead to improved gravity field retrieval. Nevertheless, this gain vanishes once temporal variations of the gravity field and related aliasing effects are taken into account. Improved de-aliasing models or some specific satellite constellations are then required to maximize the impact of the accelerometer performance gain. To evaluate the achievable acceleration performance in-orbit, a numerical simulator of the hybrid accelerometer was developed and preliminary results are given. The instrument simulator was in part validated by reproducing the performance achieved with a hybrid lab prototype operating on the ground. The problem of satellite rotation impact on the CAI was also investigated both with instrument performance simulations and experimental demonstrations. It is shown that the proposed configuration, where the EA’s proof-mass acts as the reference mirror for the CAI, seems a promising approach to allow the mitigation of satellite rotation. To evaluate the feasibility of such an instrument for space applications, a preliminary design is elaborated along with a preliminary error, mass, volume, and electrical power consumption budget. Full article

19 pages, 1465 KiB  
Article
Weighted Hybrid Feature Reduction Embedded with Ensemble Learning for Speech Data of Parkinson’s Disease
by Zeeshan Hameed, Waheed Ur Rehman, Wakeel Khan, Nasim Ullah and Fahad R. Albogamy
Mathematics 2021, 9(24), 3172; https://doi.org/10.3390/math9243172 - 9 Dec 2021
Cited by 3 | Viewed by 2670
Abstract
Parkinson’s disease (PD) is a progressive and long-term neurodegenerative disorder of the central nervous system. Studies indicate that about 90% of PD subjects have voice impairments, which are among the vital characteristics of PD patients and have been widely used for diagnostic purposes. However, the curse of dimensionality, high aliasing, redundancy, and small sample size in PD speech data pose great challenges to classifying PD subjects. Feature reduction can efficiently address these issues, but existing feature reduction algorithms ignore high aliasing, noise, and algorithm stability, and thus fail to deliver substantial classification accuracy. To mitigate these problems, this study proposes a weighted hybrid feature reduction technique embedded with ensemble learning, which comprises (1) a hybrid feature reduction technique that increases inter-class variance, reduces intra-class variance, preserves the neighborhood structure of the data, and removes correlated features that cause high aliasing and noise in classification; (2) a weighted-boosting method to train the model precisely; and (3) a bagging strategy that enhances the stability of the algorithm. The experiments were performed on three different datasets, including two widely used datasets and a dataset provided by Southwest Hospital (Army Military Medical University), Chongqing, China. The experimental results indicate that, compared with existing feature reduction methods, the proposed algorithm consistently shows the highest accuracy, precision, recall, and G-mean for PD speech data. Moreover, the proposed algorithm not only performs well in classification but also handles imbalanced data precisely, achieving the highest AUC in most cases. In addition, compared with state-of-the-art algorithms, the proposed method shows an improvement of up to 4.53%. In the future, this algorithm could be used for early and differential diagnoses, which are rated as challenging tasks. Full article
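The three-part recipe above (feature reduction, weighted boosting, bagging for stability) can be approximated with off-the-shelf scikit-learn pieces. The sketch below drops highly correlated features, projects with linear discriminant analysis, and bags AdaBoost classifiers on a synthetic dataset; it is a generic analogue under those assumptions, not the authors' weighted hybrid reduction method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

def drop_correlated(X, threshold=0.95):
    """Remove one feature of every pair whose |correlation| exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if keep[i] and keep[j] and corr[i, j] > threshold:
                keep[j] = False
    return X[:, keep]

if __name__ == "__main__":
    # Small, imbalanced synthetic stand-in for a PD speech-feature dataset.
    X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                               n_redundant=20, weights=[0.7, 0.3], random_state=0)
    X_kept = drop_correlated(X)
    X_reduced = LinearDiscriminantAnalysis(n_components=1).fit_transform(X_kept, y)
    # Bagging of boosted classifiers: boosting for accuracy, bagging for stability.
    model = BaggingClassifier(AdaBoostClassifier(n_estimators=50),
                              n_estimators=10, random_state=0)
    print(cross_val_score(model, X_reduced, y, cv=5).mean())
```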

13 pages, 554 KiB  
Article
Antiderivative Antialiasing for Stateful Systems
by Martin Holters
Appl. Sci. 2020, 10(1), 20; https://doi.org/10.3390/app10010020 - 18 Dec 2019
Cited by 11 | Viewed by 3211
Abstract
Nonlinear systems, such as guitar distortion effects, play an important role in musical signal processing. One major problem encountered in digital nonlinear systems is aliasing distortion. Consequently, various aliasing reduction methods have been proposed in the literature. One of these is based on using the antiderivative of the nonlinearity and has proven effective, but is limited to memoryless systems. In this work, it is extended to a class of stateful systems which includes but is not limited to systems with a single one-port nonlinearity. Two examples from the realm of virtual analog modeling show its applicability to and effectiveness for commonly encountered guitar distortion effect circuits. Full article
(This article belongs to the Special Issue Digital Audio Effects)
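For readers unfamiliar with the memoryless starting point that this paper extends, first-order antiderivative antialiasing replaces y[n] = f(x[n]) with the divided difference of the antiderivative F between consecutive samples. The NumPy sketch below assumes a tanh nonlinearity, whose antiderivative is log(cosh), and falls back to evaluating the nonlinearity at the midpoint when consecutive samples nearly coincide; the stateful extension itself is not shown.

```python
import numpy as np

def adaa_tanh(x, eps=1e-6):
    """First-order antiderivative antialiasing of y = tanh(x):
    y[n] = (F(x[n]) - F(x[n-1])) / (x[n] - x[n-1]),  F(x) = log(cosh(x)),
    with a midpoint fallback when the difference is too small."""
    x_prev = np.concatenate(([x[0]], x[:-1]))
    diff = x - x_prev
    F = np.logaddexp(x, -x) - np.log(2.0)        # numerically stable log(cosh(x))
    F_prev = np.concatenate(([F[0]], F[:-1]))
    safe = np.abs(diff) > eps
    divided = np.divide(F - F_prev, diff, out=np.zeros_like(x), where=safe)
    return np.where(safe, divided, np.tanh(0.5 * (x + x_prev)))

if __name__ == "__main__":
    fs, f0 = 44100, 2000.0
    t = np.arange(4096) / fs
    drive = 8.0 * np.sin(2 * np.pi * f0 * t)     # heavily driven sine
    naive = np.tanh(drive)                       # memoryless clipper, aliases strongly
    antialiased = adaa_tanh(drive)
    print("max waveform deviation from naive tanh:", np.max(np.abs(naive - antialiased)))
```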

19 pages, 742 KiB  
Article
Subsurface Detection of Shallow Targets by Undersampled Multifrequency Data and a Non-Cooperative Source
by Adriana Brancaccio, Angela Dell’Aversano, Giovanni Leone and Raffaele Solimene
Appl. Sci. 2019, 9(24), 5383; https://doi.org/10.3390/app9245383 - 9 Dec 2019
Cited by 12 | Viewed by 2273
Abstract
Imaging buried objects embedded within electrically large investigation domains can require a large number of measurement points. This is impractical if a long data acquisition time cannot be tolerated or if the system is conceived to work at some stand-off distance from the air/soil interface, for example, when it is mounted on a flying platform. In order to reduce the number of spatial measurements, we propose a method for detecting and localizing shallowly buried scattering targets from under-sampled far-field data. The method is based on a scattering model derived from the equivalence theorem for electromagnetic radiation. It exploits multi-frequency data and does not require the transmitter and receivers to be synchronized, so the source can be non-cooperative. To establish a benchmark for how far the spatial sampling can be reduced, the number of required spatial measurements is first examined by analyzing the properties of the relevant scattering operator. Then, since under-sampling the data produces aliasing artifacts, frequency diversity (i.e., multi-frequency data) is exploited to mitigate those artifacts. In particular, single-frequency reconstructions are properly fused, and a criterion for selecting the frequencies to be used is provided. Numerical examples show that the method achieves satisfactory transverse localization of the targets with far fewer measurements than those required by other methods commonly used in subsurface imaging. Full article
(This article belongs to the Special Issue Computer Methods for Direct and Inverse Modelling and Simulation)

21 pages, 12122 KiB  
Article
Algorithms for Doppler Spectral Density Data Quality Control and Merging for the Ka-Band Solid-State Transmitter Cloud Radar
by Liping Liu and Jiafeng Zheng
Remote Sens. 2019, 11(2), 209; https://doi.org/10.3390/rs11020209 - 21 Jan 2019
Cited by 17 | Viewed by 5290
Abstract
The Chinese Ka-band solid-state transmitter cloud radar (CR) can operate in three different work modes, with different pulse widths and coherent and non-coherent integration numbers, to meet the requirement for long-term cloud measurements. The CR was used to observe cloud and precipitation data in southern China in 2016. In order to resolve the data quality problems caused by coherent integration and pulse compression, which are used to detect weak clouds, this study analyzes the consistency of the reflectivity spectra across the three modes and the influence of coherent integration and pulse compression, and develops an algorithm for Doppler spectral density data quality control (QC) and merging based on multiple-mode observation data. After Doppler velocity dealiasing and artefact removal, the three types of Doppler spectral density data were merged. Then, Doppler moments such as reflectivity, radial velocity, and spectral width were recalculated from the merged reflectivity spectra. The performance of the merging algorithm was evaluated, and three conclusions were drawn. Firstly, four rounds of coherent integration with a pulse repetition frequency (PRF) of 8333 Hz underestimated the reflectivity spectra for Doppler velocities exceeding 2 m·s⁻¹, causing a large negative bias in the reflectivity and radial velocity when large drops were present. In contrast, two rounds of coherent integration affected the reflectivity spectra to a lesser extent. The reflectivity spectra were underestimated for low signal-to-noise ratios in the low-sensitivity mode. Secondly, pulse compression improved the radar sensitivity and the observation of vertical air speed, whereas the precipitation mode and coherent integration led to an underestimation of the number concentration of big raindrops and an overestimation of the number concentration of small drops. Thirdly, a comparison of the individual spectra with the merged reflectivity spectra showed that the merged data filled in the gaps in the individual spectra during weak-cloud periods, reduced the effects of coherent integration and pulse compression in liquid precipitation, mitigated the aliasing of Doppler velocity, and removed the artefacts, yielding a comprehensive and accurate depiction of most of the clouds and precipitation in the vertical column above the radar. The recalculated moments of the Doppler spectra had better quality than those merged from raw data. Full article
(This article belongs to the Special Issue Remote Sensing of Clouds)
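The Doppler-velocity aliasing and coherent-integration effects discussed above are tied to the Nyquist velocity, which shrinks as pulses are coherently integrated because the effective pulse repetition frequency drops. A back-of-the-envelope check, assuming a Ka-band wavelength of roughly 8.5 mm (the exact radar frequency is not stated in the abstract):

```latex
v_{N} \;=\; \frac{\lambda\,\mathrm{PRF}}{4\,N_{\mathrm{ci}}},
\qquad
v_{N} \;\approx\; \frac{0.0085\ \mathrm{m}\times 8333\ \mathrm{Hz}}{4\times 4}
\;\approx\; 4.4\ \mathrm{m\,s^{-1}}\ \ (\text{four coherent integrations}),
\qquad
v_{N} \;\approx\; 8.9\ \mathrm{m\,s^{-1}}\ \ (\text{two})
```

Coherent integration also low-pass filters the Doppler spectrum, so spectral densities at velocities approaching these limits are already attenuated, which is one way to read the underestimation reported above.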

22 pages, 13188 KiB  
Article
Early Fault Detection Method for Rotating Machinery Based on Harmonic-Assisted Multivariate Empirical Mode Decomposition and Transfer Entropy
by Zhe Wu, Qiang Zhang, Lixin Wang, Lifeng Cheng and Jingbo Zhou
Entropy 2018, 20(11), 873; https://doi.org/10.3390/e20110873 - 13 Nov 2018
Cited by 10 | Viewed by 4097
Abstract
It is a difficult task to analyze the coupling characteristics of rotating machinery fault signals under the influence of complex and nonlinear interference signals. This difficulty stems from the strong noise background in which rotating machinery fault features must be extracted, and from weaknesses, such as modal mixing, in the existing Ensemble Empirical Mode Decomposition (EEMD) time–frequency analysis methods. To quantitatively study the nonlinear synchronous coupling characteristics and information transfer characteristics of rotating machinery fault signals between different frequency scales under the influence of complex and nonlinear interference signals, a new nonlinear signal processing method, the harmonic-assisted multivariate empirical mode decomposition (HA-MEMD) method, is proposed in this paper. By adding high-frequency harmonic-assisted channels and subsequently removing them, the decomposition precision of the Intrinsic Mode Functions (IMFs) can be effectively improved, and the phenomenon of mode aliasing can be mitigated. Analysis of simulated signals proves the effectiveness of this method. By combining HA-MEMD with the transfer entropy algorithm and applying it to rotating machinery signal processing, a fault detection method for rotating machinery based on high-frequency harmonic-assisted multivariate empirical mode decomposition and transfer entropy (HA-MEMD-TE) was established. The main features of the mechanical transmission system were extracted by the high-frequency harmonic-assisted multivariate empirical mode decomposition method, and the signal, after noise reduction, was used for the transfer entropy calculation. An evaluation index of the rotating machinery state based on HA-MEMD-TE was established to quantitatively describe the degree of nonlinear coupling between signals and thereby evaluate and diagnose the operating state of the mechanical system. By adding noise to obtain different signal-to-noise ratios, the fault detection ability of the HA-MEMD-TE method under a strong noise background was investigated, demonstrating that the method is reliable and robust. In this paper, transfer entropy is applied to the field of rotating machinery fault diagnosis, providing a new and effective method for early fault diagnosis and performance degradation-state recognition of rotating machinery. Full article
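For reference, the coupling measure used above is transfer entropy, which in Schreiber's standard form (with embedding lengths k and l; the paper's estimator details may differ) reads:

```latex
T_{X \to Y}
\;=\; \sum p\!\left(y_{t+1},\, y_{t}^{(k)},\, x_{t}^{(l)}\right)
\log \frac{p\!\left(y_{t+1} \,\middle|\, y_{t}^{(k)},\, x_{t}^{(l)}\right)}
          {p\!\left(y_{t+1} \,\middle|\, y_{t}^{(k)}\right)}
```

It quantifies how much knowing the past of X reduces the uncertainty about the next value of Y beyond what Y's own past already provides.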

20 pages, 9080 KiB  
Article
Coupled-Region Visual Tracking Formulation Based on a Discriminative Correlation Filter Bank
by Jian Wei and Feng Liu
Electronics 2018, 7(10), 244; https://doi.org/10.3390/electronics7100244 - 11 Oct 2018
Cited by 2 | Viewed by 2814
Abstract
The visual tracking algorithm based on discriminative correlation filter (DCF) has shown excellent performance in recent years, especially as the higher tracking speed meets the real-time requirement of object tracking. However, when the target is partially occluded, the traditional single discriminative correlation filter will not be able to effectively learn information reliability, resulting in tracker drift and even failure. To address this issue, this paper proposes a novel tracking-by-detection framework, which uses multiple discriminative correlation filters called discriminative correlation filter bank (DCFB), corresponding to different target sub-regions and global region patches to combine and optimize the final correlation output in the frequency domain. In tracking, the sub-region patches are zero-padded to the same size as the global target region, which can effectively avoid noise aliasing during correlation operation, thereby improving the robustness of the discriminative correlation filter. Considering that the sub-region target motion model is constrained by the global target region, adding the global region appearance model to our framework will completely preserve the intrinsic structure of the target, thus effectively utilizing the discriminative information of the visible sub-region to mitigate tracker drift when partial occlusion occurs. In addition, an adaptive scale estimation scheme is incorporated into our algorithm to make the tracker more robust against potential challenging attributes. The experimental results from the OTB-2015 and VOT-2015 datasets demonstrate that our method performs favorably compared with several state-of-the-art trackers. Full article
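The zero-padding step described above (embedding each sub-region patch in an array of the global region's size before frequency-domain correlation) can be illustrated with a minimal NumPy sketch of circular cross-correlation via the FFT. The patch sizes are assumptions, and no correlation-filter learning is shown.

```python
import numpy as np

def zero_pad_to(patch, shape):
    """Embed a small patch in a zero array of the target (global-region) shape."""
    padded = np.zeros(shape, dtype=float)
    padded[:patch.shape[0], :patch.shape[1]] = patch
    return padded

def fft_cross_correlation(a, b):
    """Circular cross-correlation of two equally sized arrays via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_region = rng.standard_normal((64, 64))
    sub_region = global_region[10:26, 20:36]          # a 16x16 target sub-region
    padded_sub = zero_pad_to(sub_region, global_region.shape)
    response = fft_cross_correlation(global_region, padded_sub)
    peak = np.unravel_index(response.argmax(), response.shape)
    print("correlation peak at", peak)                # expected at (10, 20)
```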

10 pages, 3561 KiB  
Article
Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data
by Na Wei and Rongxin Fang
Sensors 2016, 16(5), 679; https://doi.org/10.3390/s16050679 - 11 May 2016
Cited by 4 | Viewed by 3854
Abstract
With sparse and uneven site distribution, Global Positioning System (GPS) data are just barely able to infer low-degree coefficients of the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of the low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically show that the optimal truncation degree should be degree 6–7 for a GPS-only inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) data with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we show that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one, or any combination, of the unresolved higher degrees, which helps identify the major error source among all the unresolved higher degrees. Results show that the unresolved higher degrees below degree 20 are the major error source for the global inversion. We also theoretically show that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion if the neglected higher degrees are well known from other sources. Full article
(This article belongs to the Section Remote Sensors)
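The way unresolved higher degrees alias into the low-degree estimates can be written out with a standard least-squares omission-error argument; the sketch below follows that general form, while the exact definition and scaling of the paper's Scaled Sensitivity Matrix are not reproduced here. Splitting the displacement data into resolved and unresolved parts,

```latex
\mathbf{d} \;=\; \mathbf{A}_{L}\,\mathbf{x}_{L} \;+\; \mathbf{A}_{H}\,\mathbf{x}_{H} \;+\; \boldsymbol{\varepsilon},
\qquad
\hat{\mathbf{x}}_{L}
\;=\; \left(\mathbf{A}_{L}^{\mathsf{T}}\mathbf{A}_{L}\right)^{-1}\mathbf{A}_{L}^{\mathsf{T}}\,\mathbf{d}
\;=\; \mathbf{x}_{L}
\;+\; \underbrace{\left(\mathbf{A}_{L}^{\mathsf{T}}\mathbf{A}_{L}\right)^{-1}\mathbf{A}_{L}^{\mathsf{T}}\mathbf{A}_{H}}_{\text{maps unresolved }\mathbf{x}_{H}\text{ into }\hat{\mathbf{x}}_{L}}\;\mathbf{x}_{H}
\;+\; \text{noise}
```

so the low-degree estimates are contaminated by the unresolved higher degrees through the bracketed matrix; truncating at the optimal degree keeps this term small, and when the higher degrees are known from other sources their contribution can be removed.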