Signals doi: 10.3390/signals5010008
Authors: Pierre-Etienne Martin
The ApeTI dataset was built with the aim of retrieving physiological signals such as heart rate, breathing rate, and cognitive load from thermal images of great apes. We want to develop computer vision tools that psychologists and animal behavior researchers can use to retrieve physiological signals noninvasively. Our goal is to increase the use of the thermal imaging modality in the community and avoid more invasive recording methods when answering research questions. The first step in retrieving physiological signals from thermal imaging is their spatial segmentation, which allows the time series of the regions of interest to be analyzed. For this purpose, we present a thermal imaging dataset based on recordings of chimpanzees, with the face and nose annotated using a bounding box and nine landmarks. The face and landmark locations can then be used to extract physiological signals. The dataset was acquired using a thermal camera at the Leipzig Zoo. Juice was provided in the vicinity of the camera to encourage the chimpanzees to approach and provide a good view of the face. Several computer vision methods are presented and evaluated on this dataset. We reach mAPs of 0.74 for face detection and 0.98 for landmark estimation using our proposed combination of the Tifa and Tina models inspired by the HRNet models. A proof of concept for physiological signal retrieval is presented but requires further investigation before it can be fully evaluated. The dataset and the implementation of the Tina and Tifa models are available to the scientific community for performance comparison or further applications.
Signals doi: 10.3390/signals5010007
Authors: Harry T. Mason Astrid Priscilla Martinez-Cedillo Quoc C. Vuong Maria Carmen Garcia-de-Soria Stephen Smith Elena Geangu Marina I. Knight
Infant electrocardiograms (ECGs) and heart rates (HRs) are very useful biosignals for psychological research and clinical work, but can be hard to analyse properly, particularly longform (≥5 min) recordings taken in naturalistic environments. Infant HRs are typically much faster than adult HRs, and so some of the underlying frequency assumptions made about adult ECGs may not hold for infants. However, the bulk of publicly available ECG approaches focus on adult data. Here, existing open source ECG approaches are tested on infant datasets. The best-performing open source method is then modified to maximise its performance on infant data (e.g., including a 15 Hz high-pass filter, adding local peak correction). The HR signal is then subsequently analysed, developing an approach for cleaning data with separate sets of parameters for the analysis of cleaner and noisier HRs. A Signal Quality Index (SQI) for HR is also developed, providing insights into where a signal is recoverable and where it is not, allowing for more confidence in the analysis performed on naturalistic recordings. The tools developed and reported in this paper provide a base for the future analysis of infant ECGs and related biophysical characteristics. Of particular importance, the proposed solutions outlined here can be efficiently applied to real-world, large datasets.
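A minimal sketch of the kind of infant-adapted preprocessing the abstract describes: a 15 Hz high-pass filter ahead of R-peak detection. The sampling rate, filter order, and peak-picking parameters below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

FS = 500  # Hz, assumed sampling rate

def highpass_ecg(ecg, fs=FS, cutoff=15.0, order=4):
    """Zero-phase Butterworth high-pass, as suggested for fast infant HRs."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, ecg)

def detect_r_peaks(ecg, fs=FS):
    """Very rough R-peak picker on the filtered signal (illustrative only)."""
    filtered = highpass_ecg(ecg, fs)
    peaks, _ = find_peaks(filtered,
                          distance=int(0.25 * fs),   # ~240 bpm refractory gap
                          height=3 * np.std(filtered))
    return peaks

# Synthetic demo: 10 s of a 150 bpm spike train plus slow baseline wander
t = np.arange(0, 10, 1 / FS)
ecg = 0.5 * np.sin(2 * np.pi * 0.3 * t)              # baseline drift
for beat in np.arange(0.2, 10, 60 / 150):            # beats at 150 bpm
    ecg = ecg + np.exp(-((t - beat) ** 2) / (2 * 0.004 ** 2))
peaks = detect_r_peaks(ecg, FS)
```

The high-pass removes the slow drift entirely while leaving the narrow QRS-like spikes detectable, which is the motivation for cutting below the infant QRS band.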
Signals doi: 10.3390/signals5010006
Authors: Gang Jing Pedro Marin Montanari Giuseppe Lacidogna
Predicting rock bursts is essential for maintaining worker safety and the long-term growth of subsurface infrastructure. The purpose of this study is to investigate the precursor reactions and processes of rock instability. To determine the degree of rock damage, the research examines the time-varying acoustic emission (AE) features that occur when rocks are compressed uniaxially and introduces AE parameters such as the b-value, γ-value, and βt-value. The findings suggest that the evolution of rock damage during loading is adequately reflected by the b-value, γ-value, and βt-value. The relationships between b-value, γ-value, and βt-value are studied, as well as the possibility of using these three metrics as early-warning systems for rock failure.
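The b-value mentioned above is the Gutenberg-Richter slope adapted to acoustic-emission amplitudes (AE magnitude taken as log10 of peak amplitude). A hedged sketch using the classic Aki maximum-likelihood estimator follows; the paper's γ- and βt-variants are not reproduced here.

```python
import numpy as np

def ae_b_value(amplitudes, completeness_mag=None):
    """Maximum-likelihood b-value from AE peak amplitudes (arbitrary units)."""
    mags = np.log10(np.asarray(amplitudes, dtype=float))
    if completeness_mag is None:
        completeness_mag = mags.min()      # assume the catalogue is complete
    mags = mags[mags >= completeness_mag]
    # Aki (1965): b = log10(e) / (mean magnitude - completeness magnitude)
    return np.log10(np.e) / (mags.mean() - completeness_mag)

# Demo: draw magnitudes exponentially with rate b*ln(10); b = 1 is recovered
rng = np.random.default_rng(0)
true_b = 1.0
mags = rng.exponential(scale=np.log10(np.e) / true_b, size=50_000)
est_b = ae_b_value(10.0 ** mags, completeness_mag=0.0)
```

A falling b-value during loading indicates a growing share of large-amplitude events, the precursor behaviour the study tracks.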
Signals doi: 10.3390/signals5010005
Authors: Elven Kee Jun Jie Chong Zi Jie Choong Michael Lau
Pick-and-place operations are an integral part of robotic automation and smart manufacturing. By utilizing deep learning techniques on resource-constrained embedded devices, pick-and-place operations can be made more accurate, efficient, and sustainable than high-powered computer solutions. In this study, we propose a new technique for object detection on an embedded system using SSD MobileNet V2 FPN Lite with hyperparameter optimisation and image enhancement. By increasing the Red Green Blue (RGB) saturation level of the images, we gain a 7% increase in mean Average Precision (mAP) compared to the control group and a 20% increase in mAP compared to the COCO 2017 validation dataset. Using a learning rate of 0.08 with an Edge Tensor Processing Unit (TPU), we obtain high real-time detection scores of 97%. The high detection scores are important to the control algorithm, which uses the bounding box to send a signal to the collaborative robot for the pick-and-place operation.
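The saturation boost reported above can be sketched with Pillow's colour enhancer. The 1.5x factor is an illustrative guess, not the paper's setting; factor 1.0 leaves the image unchanged and values above 1.0 extrapolate away from grayscale.

```python
import numpy as np
from PIL import Image, ImageEnhance

def boost_saturation(img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Increase colour saturation; 1.0 is identity, >1.0 adds saturation."""
    return ImageEnhance.Color(img).enhance(factor)

# Demo on a synthetic muted-red tile: the red channel moves further above
# the grayscale level and the green/blue channels further below it
arr = np.full((32, 32, 3), (140, 100, 100), dtype=np.uint8)
img = Image.fromarray(arr)
boosted = np.asarray(boost_saturation(img, 1.5))
```

Applied to training images, this kind of transform is a cheap augmentation step that the abstract credits with the 7% mAP gain over the control group.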
Signals doi: 10.3390/signals5010004
Authors: Ioannis Christakis Elena Sarri Odysseas Tsakiridis Ilias Stavrakas
Air quality is a subject of study, particularly in densely populated areas, as it has been shown to affect human health and the local ecosystem. In recent years, with the rapid development of technology, low-cost sensors have emerged, with many people interested in the quality of the air in their area turning to the procurement of such sensors as they are affordable. The reliability of measurements from low-cost sensors remains a question in the research community. In this paper, the determination of the correction factor of low-cost sensor measurements by applying the least absolute shrinkage and selection operator (LASSO) regression method is investigated. The results are promising, as following the application of the correction factor determined through LASSO regression the adjusted measurements exhibit a closer alignment with the reference measurements. This approach ensures that the measurements from low-cost sensors become more reliable and trustworthy.
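A minimal sketch of LASSO-based correction of low-cost sensor readings against a reference instrument. The feature set (raw reading, temperature, humidity), the synthetic distortion model, and the regularisation strength are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n = 500
raw = rng.uniform(5, 80, n)    # low-cost sensor reading (e.g. a pollutant)
temp = rng.uniform(5, 35, n)   # ambient temperature
rh = rng.uniform(20, 90, n)    # relative humidity

# Synthetic "reference" measurements: a linear distortion of the raw signal
reference = 0.8 * raw + 0.15 * rh - 0.1 * temp + 2.0 + rng.normal(0, 1.0, n)

# Fit the correction model on the co-located features
X = np.column_stack([raw, temp, rh])
model = Lasso(alpha=0.1).fit(X, reference)
corrected = model.predict(X)

# Corrected readings should track the reference more closely than raw ones
rmse_raw = float(np.sqrt(np.mean((raw - reference) ** 2)))
rmse_corr = float(np.sqrt(np.mean((corrected - reference) ** 2)))
```

The LASSO penalty also zeroes out covariates that do not help, which is why it is attractive for picking a small, interpretable correction factor.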
Signals doi: 10.3390/signals5010003
Authors: Changjiang He David S. Leslie James A. Grant
We consider the challenge of detecting and clustering point and collective anomalies in streaming data that exhibit significant nonlinearities and seasonal structures. The challenge is motivated by detecting problems in a communications network, where we can measure the throughput of nodes, and wish to rapidly detect anomalous traffic behaviour. Our approach is to train a neural network-based nonlinear autoregressive exogenous model on initial training data, then to use the sequential collective and point anomaly framework to identify anomalies in the residuals generated by comparing one-step-ahead predictions of the fitted model with the observations, and finally, we cluster the detected anomalies with fuzzy c-means clustering using empirical cumulative distribution functions. The autoregressive model is sufficiently general and robust such that it provides the nearly (locally) stationary residuals required by the anomaly detection procedure. The combined methods are successfully implemented to create an adaptive, robust, computational framework that can be used to cluster point and collective anomalies in streaming data. We validate the method on both data from the core of the UK’s national communications network and the multivariate Skoltech anomaly benchmark and find that the proposed method succeeds in dealing with different forms of anomalies within the nonlinear signals and outperforms conventional methods for anomaly detection and clustering.
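The final clustering stage described above can be sketched in plain NumPy: fuzzy c-means applied to anomaly segments represented by their empirical CDFs sampled on a fixed grid. The grid, fuzzifier m, and cluster count are illustrative assumptions; the NARX network and the sequential anomaly detector are not reproduced here.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means (Bezdek-style). Rows of X are feature vectors."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return U, centers

# Represent each anomaly segment by its empirical CDF on a fixed grid
grid = np.linspace(-3, 7, 25)

def ecdf_vec(sample):
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

# Demo: residual segments drawn from two regimes, N(0,1) and N(4,1)
rng = np.random.default_rng(1)
segments = [rng.normal(0, 1, 200) for _ in range(10)] + \
           [rng.normal(4, 1, 200) for _ in range(10)]
X = np.array([ecdf_vec(s) for s in segments])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)    # hard assignment of each segment
```

Using ECDF vectors rather than raw residuals makes segments of different lengths comparable, which is presumably why the authors cluster in that representation.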
Signals doi: 10.3390/signals5010002
Authors: Daniel Klee Tab Memmott Barry Oken
Brain responses to discrete stimuli are modulated when multiple stimuli are presented in sequence. These alterations are especially pronounced when the time course of an evoked response overlaps with responses to subsequent stimuli, such as in a rapid serial visual presentation (RSVP) paradigm used to control a brain–computer interface (BCI). The present study explored whether the measurement or classification of select brain responses during RSVP would improve through application of an established technique for dealing with overlapping stimulus presentations, known as irregular or “jittered” stimulus onset interval (SOI). EEG data were collected from 24 healthy adult participants across multiple rounds of RSVP calibration and copy phrase tasks with varying degrees of SOI jitter. Analyses measured three separate brain signals sensitive to attention: N200, P300, and occipitoparietal alpha attenuation. Presentation jitter visibly reduced intrusion of the SSVEP, but in general, it did not positively or negatively affect attention effects, classification, or system performance. Though it remains unclear whether stimulus overlap is detrimental to BCI performance overall, the present study demonstrates that single-trial classification approaches may be resilient to rhythmic intrusions like SSVEP that appear in the averaged EEG.
Signals doi: 10.3390/signals5010001
Authors: Zhenghan Zhu
Banding the inverse of a covariance matrix has become a popular technique for estimating a covariance matrix from a limited number of samples. It is of interest to provide criteria to determine if a matrix is bandable, as well as to test the bandedness of a matrix. In this paper, we pose the bandedness testing problem as a hypothesis testing task in statistical signal processing. We then derive two detectors, namely the complex Rao test and the complex Wald test, to test the bandedness of a Cholesky-factor matrix of a covariance matrix’s inverse. Furthermore, in many signal processing fields, such as radar and communications, the covariance matrix and its parameters are often complex-valued; thus, it is of interest to focus on complex-valued cases. The first detector is based on the complex parameter Rao test theorem. It does not require the maximum likelihood estimates of unknown parameters under the alternative hypothesis. We also develop the complex parameter Wald test theorem for general cases and derive the complex Wald test statistic for the bandedness testing problem. Numerical examples and computer simulations are given to evaluate and compare the two detectors’ performance. In addition, we show that the two detectors and the generalized likelihood ratio test are equivalent for the important complex Gaussian linear models and provide an analysis of the root cause of the equivalence.
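The object being tested above is illustrated by this toy check: factor the covariance's inverse and measure how much energy lies outside a band of width k. This is only a crude diagnostic of bandedness, not the Rao or Wald statistic derived in the paper, and the real-valued demo stands in for the complex-valued case.

```python
import numpy as np

def cholesky_of_precision(cov):
    """Lower Cholesky factor L with inv(cov) = L @ L.T (real case shown)."""
    return np.linalg.cholesky(np.linalg.inv(cov))

def off_band_fraction(L, k):
    """Fraction of squared magnitude outside the first k subdiagonals."""
    n = L.shape[0]
    i, j = np.indices((n, n))
    outside = (i - j) > k
    return float(np.sum(np.abs(L[outside]) ** 2) / np.sum(np.abs(L) ** 2))

# Demo: an AR(1) covariance has a tridiagonal inverse, so the Cholesky
# factor of its precision is 1-banded (lower bidiagonal)
n, rho = 8, 0.6
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = cholesky_of_precision(cov)
```

For the AR(1) example the off-band energy is numerically zero, which is exactly the structure the bandedness hypothesis asserts.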
Signals doi: 10.3390/signals4040048
Authors: Jacopo Piccini Elias August Sami Leon Noel Aziz Hanna Tiina Siilak Erna Sif Arnardóttir
Currently, there is significant interest in developing algorithms for processing electrodermal activity (EDA) signals recorded during sleep. The interest is driven by the growing popularity and increased accuracy of wearable devices capable of recording EDA signals. If properly processed and analysed, these signals can be used for various purposes, such as identifying sleep stages and sleep-disordered breathing, while being minimally intrusive. Due to the tedious nature of manually scoring EDA sleep signals, the development of an algorithm to automate scoring is necessary. In this paper, we present a novel scoring algorithm for the detection of EDA events and EDA storms using signal processing techniques. We apply the algorithm to EDA recordings from two different and unrelated studies that have also been manually scored and evaluate its performance in terms of precision, recall, and F1 score. We obtain F1 scores of about 69% for EDA events and about 56% for EDA storms. In comparison to the literature values for scoring agreement between experts, we observe a strong agreement between automatic and manual scoring of EDA events and a moderate agreement between automatic and manual scoring of EDA storms. EDA events and EDA storms detected with the algorithm can be further processed and used as training variables in machine learning algorithms to classify sleep health.
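A minimal sketch of the two detection targets named above. The conventions used here (an event as a conductance rise above roughly 0.05 uS, a storm as at least five events within one minute, a 4 Hz sampling rate) are common in the sleep-EDA literature but are assumptions, not necessarily this paper's rules.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 4  # Hz, a typical wearable EDA sampling rate (assumed)

def detect_eda_events(eda, fs=FS, min_rise=0.05):
    """EDA events as peaks whose prominence exceeds min_rise (in uS)."""
    peaks, _ = find_peaks(eda, prominence=min_rise)
    return peaks / fs          # event times in seconds

def detect_storms(event_times, window_s=60.0, min_events=5):
    """Flag events that start a 1-minute window holding >= min_events."""
    times = np.asarray(event_times)
    starts = [t for t in times
              if np.sum((times >= t) & (times < t + window_s)) >= min_events]
    return np.array(starts)

# Demo: flat 2 uS baseline with a burst of six SCR-like bumps in one minute
t = np.arange(0, 300, 1 / FS)
eda = np.full_like(t, 2.0)
for onset in (100, 108, 116, 124, 132, 140):
    eda += 0.3 * np.exp(-((t - onset - 2) ** 2) / 8.0)
events = detect_eda_events(eda)
storms = detect_storms(events)
```

Automating exactly this kind of rule is what removes the tedium of manual scoring, with the precision/recall trade-off then controlled by the rise threshold.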
Signals doi: 10.3390/signals4040047
Authors: Domonkos Varga
The rapid growth in multimedia, storage systems, and digital computers has resulted in huge repositories of multimedia content and large image datasets in recent years. For instance, biometric databases, which can be used to identify individuals based on fingerprints, facial features, or iris patterns, have gained a lot of attention both from academia and industry. Specifically, face image quality assessment (FIQA) has become a very important part of face recognition systems, since the performance of such systems strongly depends on the quality of input data, such as blur, focus, compression, pose, or illumination. The main contribution of this paper is an analysis of Benford’s law-inspired first digit distribution and perceptual features for FIQA. To be more specific, I investigate the first digit distributions in different domains, such as wavelet or singular values, as quality-aware features for FIQA. My analysis revealed that first digit distributions with perceptual features are able to reach a high performance in the task of FIQA.
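The first-digit features described above can be sketched directly. The divergence measure and the synthetic data below are assumptions for illustration; the paper extracts digits from wavelet and singular-value domains and pairs them with perceptual features.

```python
import numpy as np

def first_digits(values):
    """Leading decimal digit (1..9) of each nonzero magnitude."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    exponent = np.floor(np.log10(v))
    return (v / 10.0 ** exponent).astype(int)

def first_digit_histogram(values):
    d = first_digits(values)
    counts = np.bincount(d, minlength=10)[1:10]
    return counts / counts.sum()

# Benford's law: P(d) = log10(1 + 1/d) for d = 1..9
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def benford_divergence(values):
    """Symmetrised KL divergence between observed and Benford digit laws."""
    p = first_digit_histogram(values) + 1e-12
    q = BENFORD
    return 0.5 * np.sum(p * np.log(p / q)) + 0.5 * np.sum(q * np.log(q / p))

# Demo: data spanning many decades follows Benford; uniform data does not
rng = np.random.default_rng(0)
benford_like = 10.0 ** rng.uniform(0, 5, 100_000)
uniform_vals = rng.uniform(1, 10, 100_000)
```

Transform-domain coefficients of natural, undistorted images tend to sit near the Benford curve, so the divergence acts as a quality-aware feature.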
Signals doi: 10.3390/signals4040046
Authors: Mustafa Khudhair Nenad Gucunski
Several factors impact the durability of concrete bridge decks, including traffic loads, fatigue, temperature changes, environmental stress, and maintenance activities. Detecting problems such as corrosion, delamination, or concrete degradation early on can lower maintenance costs. Nondestructive evaluation (NDE) techniques can detect these issues at early stages. However, each NDE method has limitations that reduce the accuracy of the assessment. In this study, multiple NDE technologies were combined with machine learning algorithms to improve the interpretation of half-cell potential (HCP) and electrical resistivity (ER) measurements. A parametric study was performed to analyze the influence of five parameters on HCP and ER measurements: the degree of saturation, corrosion length, delamination depth, concrete cover, and moisture condition of delamination. The results were obtained through finite element simulations and used to build two machine learning algorithms, a classification algorithm and a regression algorithm, both based on the Random Forest methodology. The algorithms were tested using data collected from a bridge deck in the BEAST® facility. Both machine learning algorithms were effective in improving the interpretation of the ER and HCP measurements using data from multiple NDE technologies.
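A hedged sketch of the classification half of the approach: a Random Forest relating HCP and ER readings to corrosion state. The class-conditional distributions below are synthetic stand-ins for the paper's finite-element results; the placement of the HCP classes around the common -350 mV (vs. CSE) active-corrosion rule of thumb is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
corroded = rng.random(n) < 0.5

# HCP (mV vs CSE): corroding zones read more negative; ER (kOhm-cm): lower
hcp = np.where(corroded, rng.normal(-420, 60, n), rng.normal(-180, 60, n))
er = np.where(corroded, rng.normal(8, 3, n), rng.normal(25, 6, n))
X = np.column_stack([hcp, er])

# Train on the first 1500 simulated points, test on the remaining 500
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], corroded[:1500])
acc = float(clf.score(X[1500:], corroded[1500:]))
```

Feeding both modalities to one model is the point: ER context disambiguates HCP readings that moisture or cover depth would otherwise distort.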
Signals doi: 10.3390/signals4040045
Authors: Yedukondala Rao Veeranki Riley McNaboe Hugo F. Posada-Quintero
Epilepsy is a complex neurological disorder characterized by recurrent and unpredictable seizures that affect millions of people around the world. Early and accurate epilepsy detection is critical for timely medical intervention and improved patient outcomes. Several methods and classifiers for automated epilepsy detection have been developed in previous research. However, the existing research landscape requires innovative approaches that can further improve the accuracy of diagnosing and managing patients. This study investigates the application of variable-frequency complex demodulation (VFCDM) and convolutional neural networks (CNN) to discriminate between healthy, interictal, and ictal states using electroencephalogram (EEG) data. For testing this approach, the EEG signals were collected from the publicly available Bonn dataset. A high-resolution time–frequency spectrum (TFS) of each EEG signal was obtained using the VFCDM. The TFS images were fed to the CNN classifier for the classification of the signals. The performance of CNN was evaluated using leave-one-subject-out cross-validation (LOSO CV). The TFS shows variations in its frequency for different states that correspond to variation in the neural activity. The LOSO CV approach yields a consistently high performance, ranging from 90% to 99% between different combinations of healthy and epilepsy states (interictal and ictal). The extensive LOSO CV validation approach ensures the reliability and robustness of the proposed method. As a result, the research contributes to advancing the field of epilepsy detection and brings us one step closer to developing practical, reliable, and efficient diagnostic tools for clinical applications.
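The leave-one-subject-out evaluation protocol used above can be sketched with scikit-learn's LeaveOneGroupOut. A logistic regression on random features stands in for the VFCDM time-frequency spectra and CNN, which are too heavy to reproduce here; subject counts and feature sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_subjects, per_subject = 5, 40
X = rng.normal(size=(n_subjects * per_subject, 8))
# Labels driven by one informative feature plus noise (separable toy task)
y = (X[:, 0] + 0.3 * rng.normal(size=len(X)) > 0).astype(int)
groups = np.repeat(np.arange(n_subjects), per_subject)   # subject IDs

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Each fold trains with one subject entirely held out
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(scores))
```

Grouping by subject is what makes the reported 90-99% figures credible: no recordings from the test subject leak into training.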
Signals doi: 10.3390/signals4040044
Authors: Abdussalam Omar Dale Shpak Panajotis Agathoklis
This paper presents a new method for the design of separable-denominator 2-D IIR filters with nearly linear phase in the passband. The design method is based on a balanced realization model reduction technique. The nearly linear-phase 2-D IIR filter is designed using 2-D model reduction from a linear-phase 2-D FIR filter, which serves as the initial filter. The structured controllability and observability Gramians Ps and Qs serve as the foundation for this technique; they are block-diagonal positive-definite matrices that satisfy 2-D Lyapunov equations. An efficient method is used to compute these Gramians by minimizing the traces of Ps and Qs under linear matrix inequality (LMI) constraints. The use of these Gramians ensures that the resulting 2-D IIR filter preserves stability and can be implemented using a separable-denominator 2-D filter with fewer coefficients than the original 2-D FIR filter. Numerical examples show that the proposed method compares favorably with existing techniques.
Signals doi: 10.3390/signals4040043
Authors: Jonathan Mayer Rejath Jose Gregory Kurgansky Paramvir Singh Chris Coletti Timothy Devine Milan Toma
In the field of modern healthcare, technology plays a crucial role in improving patient care and ensuring their safety. One area where advancements can still be made is in alert systems, which provide timely notifications to hospital staff about critical events involving patients. These early warning systems allow for swift responses and appropriate interventions when needed. A commonly used patient alert technology is the nurse call system, which empowers patients to request assistance using bedside devices. Over time, these systems have evolved to include features such as call prioritization, integration with staff communication tools, and links to patient monitoring setups that can generate alerts based on vital signs. There is currently a shortage of smart systems that use sensors to inform healthcare workers about the activity levels of patients who are confined to their beds. Current systems mainly focus on alerting staff when patients become disconnected from monitoring machines. In this technical note, we discuss the potential of utilizing cost-effective sensors to monitor and evaluate typical movements made by hospitalized bed-bound patients. To further improve the care provided to such patients, healthcare professionals could benefit from implementing alert systems triggered by detected patient movements. Such systems would promptly notify mobile devices or nursing stations whenever a patient displays restlessness or leaves their bed and requires urgent medical attention.
Signals doi: 10.3390/signals4040042
Authors: Mickaël Zehren Marco Alunno Paolo Bientinesi
Within the broad problem known as automatic music transcription, we considered the specific task of automatic drum transcription (ADT). This is a complex task that has recently shown significant advances thanks to deep learning (DL) techniques. Most notably, massive amounts of labeled data obtained from crowds of annotators have made it possible to implement large-scale supervised learning architectures for ADT. In this study, we explored the untapped potential of these new datasets by addressing three key points: First, we reviewed recent trends in DL architectures and focused on two techniques, self-attention mechanisms and tatum-synchronous convolutions. Then, to mitigate the noise and bias that are inherent in crowdsourced data, we extended the training data with additional annotations. Finally, to quantify the potential of the data, we compared many training scenarios by combining up to six different datasets, including zero-shot evaluations. Our findings revealed that crowdsourced datasets outperform previously utilized datasets, and regardless of the DL architecture employed, they are sufficient in size and quality to train accurate models. By fully exploiting this data source, our models produced high-quality drum transcriptions, achieving state-of-the-art results. Thanks to this accuracy, our work can be more successfully used by musicians (e.g., to learn new musical pieces by reading, or to convert their performances to MIDI) and researchers in music information retrieval (e.g., to retrieve information from the notes instead of audio, such as the rhythm or structure of a piece).
Signals doi: 10.3390/signals4040041
Authors: Yousuf Al-Aali Mounir T. Hamood Said Boussakta
This paper introduces a new derivation of the radix-2² fast algorithm for the forward odd new Mersenne number transform (ONMNT) and the inverse odd new Mersenne number transform (IONMNT). This involves introducing new equations and functions in finite fields, bringing particular challenges unlike those in other fields. The radix-2² algorithm combines the benefits of the reduced number of operations of the radix-4 algorithm and the simple butterfly structure of the radix-2 algorithm, making it suitable for various applications such as lightweight ciphers, authenticated encryption, hash functions, signal processing, and convolution calculations. The multidimensional linear index mapping technique is the conventional method used to derive the radix-2² algorithm. However, this method does not provide clear insights into the underlying structure and flexibility of the radix-2² approach. This paper addresses this limitation and proposes a derivation based on bit-unscrambling techniques, which reverse the ordering of the output sequence, resulting in efficient calculations with fewer operations. Butterfly and signal flow diagrams are also presented to illustrate the structure of the fast algorithm for both ONMNT and IONMNT. The proposed method should pave the way for efficient and flexible implementation of ONMNT and IONMNT in applications such as lightweight ciphers and signal processing. The algorithm has been implemented in C and is validated with an example.
Signals doi: 10.3390/signals4040040
Authors: Stathis Hadjidemetriou Ansgar Malich Lorenz Damian Rossknecht Luca Ferrarini Ismini E. Papageorgiou
The reconstruction in MRI assumes a uniform radio-frequency field. However, this is violated due to coil field nonuniformity and sensitivity variations. In whole-body MRI, the nonuniformities are more complex due to the imaging with multiple coils that typically have different overall sensitivities, resulting in sharp sensitivity changes at the junctions between adjacent coils. These lead to images with anatomically inconsequential intensity nonuniformities that include jump discontinuities at the junctions corresponding to adjacent coils. The body is also imaged with multiple contrasts that result in images with different nonuniformities. A method is presented for the joint intensity uniformity restoration of two such images to achieve intensity homogenization. The effect of the spatial intensity distortion on the auto-co-occurrence statistics of each image, as well as on the joint-co-occurrence statistics of the two images, is modeled in terms of Point Spread Functions (PSFs). The PSFs and the non-stationary deconvolution of these PSFs from the statistics offer posterior Bayesian expectation estimates of the nonuniformity with Bayesian coring. Subsequently, a piecewise smoothness constraint is imposed on the nonuniformity. This uses non-isotropic smoothing of the restoration field to allow the modeling of junction discontinuities. The implementation of the restoration method is iterative and imposes stability and validity constraints on the nonuniformity estimates. The effectiveness and accuracy of the method are demonstrated extensively with whole-body MRI image pairs of thirty-one cancer patients.
Signals doi: 10.3390/signals4040039
Authors: Marco Ivaldi Lorenzo Giacometti David Conversi
In this study, the alpha and beta spectral frequency bands and amplitudes of EEG signals recorded from 10 healthy volunteers using an experimental cap with neoprene jacketed electrodes were analysed. Background: One of the main limitations in the analysis of EEG signals during movement is the presence of artefacts due to cranial muscle contraction; the objectives of this study therefore focused on two main aspects: (1) validating a tool capable of decreasing movement artefacts, while developing a reliable method for the quantitative analysis of EEG data; (2) using this method to analyse the EEG signal recorded during a particular motor activity (bi- and monopodalic postural control). Methods: The EEG sampling frequency was 512 Hz; the signal was acquired on 16 channels with monopolar montage and the reference on Cz. The recorded signals were processed using a specifically written Matlab script and also by exploiting open-source software (Eeglab). Results: The procedure used showed excellent reliability, allowing for a significant decrease in movement artefacts even during motor tasks performed both with eyes open and with eyes closed. Conclusions: This preliminary study lays the foundation for correctly recording EEG signals as an additional source of information in the study of human movement.
Signals doi: 10.3390/signals4040038
Authors: Razi Hamada Ievgeniia Kuzminykh
IP cameras and digital video recorders, as part of the Internet of Surveillance Things (IoST) technology, can sometimes allow unauthenticated access to the video feed or management dashboard. These vulnerabilities may result from weak APIs, misconfigurations, or hidden firmware backdoors. What is particularly concerning is that these vulnerabilities can stay unnoticed for extended periods, spanning weeks, months, or even years, until a malicious attacker decides to exploit them. The response actions in case of identifying the vulnerability, such as updating software and firmware for millions of IoST devices, might be challenging and time-consuming. Implementing an air-gapped video surveillance network, which is isolated from the internet and external access, can reduce the cybersecurity threats associated with internet-connected IoST devices. However, such networks can also be susceptible to other threats and attacks, which need to be explored and analyzed. In this work, we perform a systematic literature review on the current state of research and use cases related to compromising and protecting cameras in logical and physical air-gapped networks. We provide a network diagram for each mode of exploitation, discuss the vulnerabilities that could result in a successful attack, demonstrate the potential impacts on organizations in the event of IoST compromise, and outline the security measures and mechanisms that can be deployed to mitigate these security risks.
Signals doi: 10.3390/signals4040037
Authors: Ryo Matsuoka Masahiro Okuda
In this paper, we propose robust image-smoothing methods based on ℓ0 gradient minimization with novel gradient constraints to effectively suppress pseudo-edges. Simultaneously minimizing the ℓ0 gradient, i.e., the number of nonzero gradients in an image, and the ℓ2 data fidelity results in a smooth image. However, this optimization often leads to undesirable artifacts, such as pseudo-edges, known as the “staircasing effect”, and halos, which become more visible in image enhancement tasks, like detail enhancement and tone mapping. To address these issues, we introduce two types of gradient constraints: box and ball. These constraints are applied using a reference image (e.g., the input image is used as a reference for image smoothing) to suppress pseudo-edges in homogeneous regions and the blurring effect around strong edges. We also present an ℓ0 gradient minimization problem based on the box-/ball-type gradient constraints using an alternating direction method of multipliers (ADMM). Experimental results on important applications of ℓ0 gradient minimization demonstrate the advantages of our proposed methods compared to existing ℓ0 gradient-based approaches.
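For intuition, here is the classic half-quadratic-splitting form of ℓ0 gradient smoothing (in the style of Xu et al.) on a 1-D signal. The paper's ADMM solver and its box-/ball-type gradient constraints relative to a reference image are not reproduced; the parameter schedule below is the conventional one and is an assumption.

```python
import numpy as np

def l0_smooth_1d(f, lam=2e-2, kappa=2.0, beta_max=1e5):
    """Minimise ||u - f||^2 + lam * ||grad u||_0 by alternating splitting."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)      # forward-difference operator
    u = f.copy()
    beta = 2.0 * lam                    # conventional starting penalty
    while beta < beta_max:
        g = D @ u
        # l0 "hard shrinkage": keep a gradient only if it is worth lam/beta
        h = np.where(g ** 2 > lam / beta, g, 0.0)
        # Quadratic subproblem: (I + beta D^T D) u = f + beta D^T h
        u = np.linalg.solve(np.eye(n) + beta * (D.T @ D),
                            f + beta * (D.T @ h))
        beta *= kappa
    return u

# Demo: a noisy step becomes piecewise constant with a single surviving jump
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.normal(size=100)
u = l0_smooth_1d(f)
```

The pseudo-edges the abstract targets arise precisely because this baseline objective is free to place spurious nonzero gradients in homogeneous regions; the proposed box/ball constraints tie the surviving gradients to a reference image to suppress them.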
Signals doi: 10.3390/signals4040036
Authors: Madduma Wellalage Pasan Maduranga Valmik Tilwari Ruvan Abeysekera
Recent developments in machine learning algorithms are playing a significant role in wireless communication and Internet of Things (IoT) systems. Location-based Internet of Things services (LBIoTS) are considered one of the primary applications among IoT applications. The key information involved in LBIoTS is an object's geographical location. The Global Positioning System (GPS) performs poorly in indoor environments due to multipath effects. Numerous methods have been investigated for indoor localization scenarios. However, precise location estimation of a moving object in such applications is challenging due to high signal fluctuations. Therefore, this paper presents machine learning algorithms to estimate an object's location based on Received Signal Strength Indicator (RSSI) values collected through Bluetooth Low Energy (BLE)-based nodes. In this experiment, we utilize a publicly available RSSI dataset. The RSSI data are collected, with labels, from different BLE iBeacon nodes installed in a complex indoor environment. Then, the RSSI data are linearized using the weighted least-squares method and filtered using moving average filters. Moreover, machine learning algorithms are used for training and testing on the dataset to estimate the precise location of the objects. All the proposed algorithms were tested and evaluated under different hyperparameters. The tested models achieved accuracies of approximately 85% for KNN, 84% for SVM, and 76% for FFNN.
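A hedged sketch of fingerprint-style KNN positioning from BLE RSSI, including the moving-average pre-filter mentioned above. The beacon layout, log-distance path-loss model, noise level, and k are illustrative assumptions, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def rssi_at(pos, noise_db):
    """Log-distance path loss: RSSI = -40 - 20*log10(d) + Gaussian noise."""
    d = np.linalg.norm(beacons - np.asarray(pos, dtype=float), axis=1)
    return -40.0 - 20.0 * np.log10(d) + noise_db * rng.normal(size=len(beacons))

def moving_average(rssi_stream, w=5):
    """Simple moving-average filter for an RSSI time series."""
    return np.convolve(rssi_stream, np.ones(w) / w, mode="valid")

# Fingerprint grid: 9 reference zones on a 3 m pitch
cells = [(x, y) for x in (1.0, 4.0, 7.0) for y in (1.0, 4.0, 7.0)]
X_train = np.array([rssi_at(c, 1.0) for c in cells for _ in range(30)])
y_train = np.repeat(np.arange(len(cells)), 30)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

X_test = np.array([rssi_at(c, 1.0) for c in cells for _ in range(10)])
y_test = np.repeat(np.arange(len(cells)), 10)
acc = float(knn.score(X_test, y_test))

# Smoothing a noisy stream from one position reduces its variance
stream = np.array([rssi_at((5.0, 5.0), 1.0)[0] for _ in range(200)])
smoothed = moving_average(stream)
```

The pre-filter matters because raw BLE RSSI fluctuates by several dB; averaging a few consecutive samples before classification reduces exactly the fluctuation the abstract identifies as the main difficulty.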
Signals doi: 10.3390/signals4040035
Authors: Amir Siraj Abraham Loeb
The first meter-scale interstellar meteor (IM1) was detected by US government sensors in 2014, identified as an interstellar object candidate in 2019, and confirmed by the Department of Defense (DoD) in 2022. We use data from a nearby seismometer to localize the fireball to a ∼16 km² region within the ∼120 km² zone allowed by the precision of the DoD-provided coordinates. The improved localization is of great importance for a forthcoming expedition to retrieve the meteor fragments.
Signals doi: 10.3390/signals4030034
Authors: Hamid Abbasi Malcolm R. Battin Robyn Butler Deborah Rowe Benjamin A. Lear Alistair J. Gunn Laura Bennet
Reliable prognostic biomarkers are needed to support the early diagnosis of brain injury in extremely preterm infants, and to develop effective neuroprotective protocols that are tailored to the progressing phases of injury. Experimental and clinical research shows that severity of neuronal damage is correlated with changes in the electroencephalogram (EEG) after hypoxic-ischemia (HI). We have previously reported that micro-scale sharp-wave EEG waveforms have prognostic utility within the early hours of post-HI recordings in preterm fetal sheep, before injury develops. This article aims to investigate whether these subtle EEG patterns are translational in the early hours of life in clinical recordings from extremely preterm newborns. This work evaluates the existence and morphological similarity of the sharp-waves automatically identified throughout the entire duration of EEG data from a cohort of fetal sheep 6 h after HI (n = 7, at 103 ± 1 day gestation) and in recordings commencing before 6 h of life in extremely preterm neonates (n = 7, 27 ± 2.0 weeks gestation). We report that micro-scale EEG waveforms with similar morphology and characteristics (r = 0.94) to those seen in fetal sheep after HI are also present after birth in recordings started before 6 h of life in extremely preterm neonates. This work further indicates that the post-HI sharp-waves show rapid morphological evolution, influenced by age and/or severity of neuronal loss, and thus that automated algorithms should be validated against such signal variations. Finally, this article discusses the need for more focused research on the early assessment of EEG changes in preterm infants to help determine the timing of brain injury to identify biomarkers that could assist in targeting novel therapies for particular phases of injury.
]]>Signals doi: 10.3390/signals4030033
Authors: Shalin Ye Shufan Wu
This paper addresses the problem of designing an adaptive Kalman consensus filter (a-KCF) which is embedded in multiple mobile agents distributed in a 2D domain. The role of such filters is to provide adaptive estimation of the states of a dynamic linear system through communication over a wireless sensor network. It is assumed that each sensing device (embedded in each agent) provides partial state measurements and transmits the information to its immediate neighbors in the communication topology. An adaptive consensus algorithm is then adopted to enforce agreement on the state estimates among all connected agents. The basis of the a-KCF design is derived from classic Kalman filtering theory; the adaptation of the consensus gain for each local filter in the disagreement terms improves the convergence of the error between the estimated and actual states of the dynamic linear system, driving it to zero in appropriate norms. Simulation results testing the performance of the a-KCF confirm the validity of our design.
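The consensus-augmented Kalman update described above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' a-KCF: the consensus gain is fixed rather than adapted, and all names and shapes are assumptions.

```python
import numpy as np

def kcf_step(x_hat, P, z, A, H, Q, R, neighbors, gamma):
    """One consensus-Kalman step for every agent.

    x_hat: list of per-agent state estimates, P: list of covariances,
    z: list of local measurements, neighbors: adjacency lists,
    gamma: consensus gain (fixed here; the paper adapts it per filter).
    """
    n = len(x_hat)
    new_x = []
    for i in range(n):
        # Local Kalman prediction.
        x_pred = A @ x_hat[i]
        P_pred = A @ P[i] @ A.T + Q
        # Local measurement update.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_upd = x_pred + K @ (z[i] - H @ x_pred)
        P[i] = (np.eye(len(x_pred)) - K @ H) @ P_pred
        # Consensus term pulls the estimate toward the neighbors'.
        disagreement = sum(x_hat[j] - x_hat[i] for j in neighbors[i])
        new_x.append(x_upd + gamma * disagreement)
    return new_x, P
```

Iterating this step shrinks the disagreement between connected agents while each local filter tracks its own measurements.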
]]>Signals doi: 10.3390/signals4030032
Authors: Charisios Achillas Dimitrios Aidonis Naoum Tsolakis Ioannis Tsampoulatidis Alexandros Mourouzis Christos Bialas Kyriakos Koritsoglou
In recent years, accessibility has become a topic of great interest on a global scale across the scientific, business, and policy sectors. There are two primary reasons for this growing trend. Firstly, accessibility serves as a vital indicator reflecting the social performance of communities, and the public is increasingly aware of critical social issues such as accessibility. Secondly, accessibility is essential for the sustainable development of regions and civil settings, facilitating inclusion and business growth. In this regard, information and communications technologies can play a crucial role in facilitating the accessibility of spaces by disabled people. Numerous digital tools and smart applications are already available to serve this purpose. This study presents a novel digital tool called ‘Seek & Go’, a comprehensive aid application designed specifically for disabled individuals. The app features a GPS navigation system that caters to pedestrians with disabilities and unique accessibility requirements. The present study documents the models underlying the development of ‘Seek & Go’, discusses technical aspects of the application, and presents user experience insights.
]]>Signals doi: 10.3390/signals4030031
Authors: Biyun Ma Diyuan Xu Xinyu Ren Yide Wang Jiaojiao Liu
To balance information security and energy harvesting for massive Internet of Things (IoT) devices, an unmanned aerial vehicle (UAV)–assisted secure communication model is proposed in this paper. We extend the secure transmission model with physical layer security (PLS) to simultaneous wireless information and power transfer (SWIPT) technology and optimize the UAV trajectory, transmission power, and power splitting ratio (PSR). The nonconvex objective function is decomposed into three subproblems. Then a robust iterative suboptimal algorithm based on the block coordinate descent (BCD) method is proposed to solve the subproblems. Numerical simulation results are provided to show the effectiveness of the proposed method. These results clearly illustrate that our resource allocation schemes surpass baseline schemes in terms of both transmit power and harvested energy ratio, while approximately maintaining the instantaneous secrecy rate.
]]>Signals doi: 10.3390/signals4030030
Authors: Abdussalam Omar Dale Shpak Panajotis Agathoklis Belaid Moa
In this paper, a new optimization method for the design of nearly linear-phase two-dimensional infinite impulse response (2D IIR) digital filters with a separable denominator is proposed. A design framework for 2D IIR digital filters is formulated as a nonlinear constrained optimization problem where the group delay deviation in the passband is minimized under prescribed soft magnitude constraints and hard stability requirements. To achieve this goal, sub-level sets of the group delay deviations are utilized to generate a sequence of filters, from which the one with the best performance is selected. The quality of the obtained filter is evaluated using three quality factors: the passband magnitude quality factor Qh, the group delay deviation quality factor Qτ, and a new quality factor Qs that assesses the performance in the stopband relative to the minimum filter gain in the passband. The proposed framework is implemented using the interior-point (IP) method in a MATLAB environment, and the experimental results show that filters designed using the proposed method have a good magnitude response and low group delay deviation. The performance of the resulting filters is compared with the results of other methods.
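The passband group delay deviation being minimized can be illustrated in a few lines. The filter here is a hypothetical 1D Butterworth stand-in (the paper designs 2D separable-denominator IIR filters); it only shows how the deviation quantity is measured.

```python
import numpy as np
from scipy import signal

# Hypothetical example filter: a 4th-order Butterworth low-pass with
# normalized cutoff 0.3, standing in for one candidate design.
b, a = signal.butter(4, 0.3)

# Group delay in samples over a dense frequency grid (rad/sample).
w, gd = signal.group_delay((b, a), w=1024)

# Passband edge at 0.3*pi rad/sample; the quantity minimized in the
# paper's framework is the spread of the group delay in the passband.
passband = w <= 0.3 * np.pi
deviation = gd[passband].max() - gd[passband].min()
print(f"group-delay deviation in passband: {deviation:.3f} samples")
```

An IIR filter with a small `deviation` behaves nearly linear-phase inside its passband, which is the design goal above.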
]]>Signals doi: 10.3390/signals4030029
Authors: Claudia Ferraris Gianluca Amprimo Giuseppe Pettiti
Structural deterioration is a primary long-term concern resulting from material wear and tear, events, solicitations, and disasters that can progressively compromise the integrity of a cement-based structure until it suddenly collapses, becoming a potential and latent danger to the public. For many years, manual visual inspection has been the only viable structural health monitoring (SHM) solution. Technological advances have led to the development of sensors and devices suitable for the early detection of changes in structures and materials using automated or semi-automated approaches. Recently, solutions based on computer vision, imaging, and video signal analysis have gained momentum in SHM due to increased processing and storage performance, the ability to easily monitor inaccessible areas (e.g., through drones and robots), and recent progress in artificial intelligence fueling automated recognition and classification processes. This paper summarizes the most recent studies (2018–2022) that have proposed solutions for the SHM of infrastructures based on optical devices, computer vision, and image processing approaches. The preliminary analysis revealed an initial subdivision into two macro-categories: studies that implemented vision systems and studies that accessed image datasets. Each study was then analyzed in more detail to present a qualitative description related to the target structures, type of monitoring, instrumentation and data source, methodological approach, and main results, thus providing a more comprehensive overview of the recent applications in SHM and facilitating comparisons between the studies.
]]>Signals doi: 10.3390/signals4030028
Authors: Loris Nanni Giovanni Faldani Sheryl Brahnam Riccardo Bravin Elia Feltrin
This paper presents a study of an automated system for identifying planktic foraminifera at the species level. The system uses a combination of deep learning methods, specifically convolutional neural networks (CNNs), to analyze digital images of foraminifera taken at different illumination angles. The dataset is composed of 1437 groups of sixteen grayscale images, one group for each foraminifera specimen, that are then converted to RGB images with various processing methods. These RGB images are fed into a set of CNNs, organized in an ensemble learning (EL) environment. The ensemble is built by training different networks using different approaches for creating the RGB images. The study finds that an ensemble of CNN models trained on different RGB images improves the system’s performance compared to other state-of-the-art approaches. The main focus of this paper is to introduce multiple colorization methods that differ from the current cutting-edge techniques; novel strategies like Gaussian or mean-based techniques are suggested. The proposed system was also found to outperform human experts in classification accuracy.
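The mean-based colorization idea, folding the sixteen illumination angles into three RGB channels, can be sketched as follows. The grouping into thirds is an illustrative assumption, not the paper's exact recipe.

```python
import numpy as np

def mean_colorize(stack):
    """Fold a (16, H, W) grayscale stack into one (H, W, 3) RGB image
    by averaging three groups of illumination angles. The split into
    thirds is a hypothetical choice for illustration only."""
    groups = np.array_split(stack, 3, axis=0)
    rgb = np.stack([g.mean(axis=0) for g in groups], axis=-1)
    # Normalize to [0, 255] uint8, a typical CNN input format.
    mn, mx = rgb.min(), rgb.max()
    rgb = (rgb - mn) / (mx - mn + 1e-9) * 255.0
    return rgb.astype(np.uint8)
```

Varying the colorization rule (e.g., Gaussian-weighted instead of plain means) yields differently trained CNNs, which is what gives the ensemble its diversity.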
]]>Signals doi: 10.3390/signals4030027
Authors: Aubrey N. Beal
We present an algorithm for extracting basis functions from the chaotic Lorenz system along with timing and bit-sequence statistics. Previous work focused on modifying Lorenz waveforms and extracting the basis function of a single state variable. Importantly, these efforts initiated the development of solvable chaotic systems with simple matched filters, which are suitable for many spread spectrum applications. However, few solvable chaotic systems are known, and they are highly dependent upon an engineered basis function. Non-solvable Lorenz signals are often used to test time-series prediction schemes and are also central to efforts to maximize spectral efficiency by joining radar and communication waveforms. Here, we provide extracted basis functions for all three Lorenz state variables, their timing statistics, and their bit-sequence statistics. Further, we outline a detailed algorithm suitable for the extraction of basis functions from many chaotic systems such as the Lorenz system. These results promote the search for engineered basis functions in solvable chaotic systems, provide tools for joining radar and communication waveforms, and give an algorithmic process for modifying chaotic Lorenz waveforms to quantify the performance of chaotic time-series forecasting methods. The results presented here provide engineered test signals compatible with quantitative analysis of predicted amplitudes and regular timing.
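Generating the Lorenz waveforms and a lobe-occupancy bit sequence is straightforward; the basis-function extraction itself is more involved, so this sketch only produces the test signals and one of the symbol statistics the paper characterizes.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Standard Lorenz vector field with the classic parameters.
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    # One fourth-order Runge-Kutta integration step.
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def lorenz_bits(x_series):
    # Bit sequence from lobe occupancy (sign of x after each switch),
    # one of the bit-sequence statistics discussed above.
    signs = np.sign(x_series)
    changes = np.flatnonzero(np.diff(signs))
    return (signs[changes + 1] > 0).astype(int)
```

Integrating from, say, `(1, 1, 1)` with `dt = 0.01` produces the aperiodic x-waveform whose inter-switch timings and bit statistics can then be tabulated.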
]]>Signals doi: 10.3390/signals4030026
Authors: Eduardo Arrufat-Pié Mario Estévez-Báez José Mario Estévez-Carreras Gerry Leisman Calixto Machado Carlos Beltrán-León
This study investigates the use of empirical mode decomposition (EMD) to extract intrinsic mode functions (IMFs) for the spectral analysis of EEG signals in healthy individuals and its possible biological interpretations. Unlike traditional EEG analysis, this approach does not require the establishment of arbitrary band limits. The study uses a multivariate EMD algorithm (APIT-MEMD) to extract IMFs from the EEG signals of 34 healthy volunteers. The first six IMFs are analyzed using two different methods, based on FFT and HHT, and the results are compared using the ANOVA test and the Bland–Altman method for agreement testing. The outcomes show that the frequency values of the first six IMFs fall within the range of classic EEG bands (1.72–52.4 Hz). Although there was a lack of agreement in the mean weighted frequency values of the first three IMFs between the two methods (>3 Hz), both methods showed similar results for power spectral density (<5% difference in normalized power spectral density). The HHT method is found to have better frequency resolution than APIT-MEMD combined with FFT, producing less overlap between IMF3 and IMF4 (p = 0.0046), and is recommended for analyzing the spectral properties of IMFs. The study concludes that the HHT method could help to avoid the assumption of strict frequency band limits, and that the potential impact of EEG physiological phenomena on mode-mixing interpretation, particularly for the alpha and theta ranges, must be considered in future research.
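The HHT-style amplitude-weighted mean frequency compared in the study is computed from an IMF via its analytic signal. In this sketch a synthetic 10 Hz sinusoid stands in for an extracted IMF, and the 250 Hz sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                              # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
imf = np.sin(2 * np.pi * 10 * t)        # synthetic 10 Hz "IMF" stand-in

# Analytic signal -> instantaneous phase, frequency, and amplitude.
analytic = hilbert(imf)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
amp = np.abs(analytic)[:-1]

# Amplitude-weighted mean frequency, the HHT-based quantity compared
# against the FFT-based estimate in the study.
mean_freq = np.sum(inst_freq * amp**2) / np.sum(amp**2)
```

For a real IMF the instantaneous frequency varies over time, and this weighting emphasizes the high-energy portions of the mode.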
]]>Signals doi: 10.3390/signals4030025
Authors: Orlando Camargo Rodríguez Lilun Zhang Xinghua Cheng
Experimental data from the SACLANTCEN 1993 Mediterranean Experiment are reviewed to assess the reduction of the search space for the localization and tracking of an acoustic source in a three-dimensional environment. Key to this goal is the availability of an initial estimate of source range and depth (called the 2D initial guess); an ambiguous estimate of source bearing can be obtained from the 2D initial guess through Environmental Signal Processing, and the ambiguity can be removed by searching for the source only in the range/bearing regions where bearing estimates are higher. This search provides a new estimate of source range and a single bearing, which together with the estimate for source depth constitute the center of the reduced search space for source localization and tracking. The suggested approach is tested on experimental data from the SACLANTCEN experiment considering different frequencies, as well as a stationary and a moving source.
]]>Signals doi: 10.3390/signals4020024
Authors: M. A. Vieira M. Vieira P. Louro P. Vieira A. Fantoni
An innovative treatment for congested urban road networks is the split intersection. Here, a congested two-way–two-way traffic light-controlled intersection is transformed into two lighter intersections. By reducing conflict points and improving travel time, it facilitates smoother flow with less driver delay. We propose a visible light communication (VLC) system based on Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I) and Infrastructure-to-Vehicle (I2V) communications able to safely manage vehicles crossing through an intersection, leveraging Edge of Things (EoT) facilities. Headlights, street lamps, and traffic signals are used by connected vehicles to communicate with one another and with infrastructure. Through internally installed Driver Agents, an Intersection Manager coordinates traffic flow and interacts with vehicles. For the safe passage of vehicles across intersections, request/response mechanisms and time and space relative pose concepts are used. A virtual scenario is proposed, and a “mesh/cellular” hybrid architecture is used. Transmitters emit light signals by encoding, modulating, and converting data. Optical sensors with light-filtering properties are used as receivers and decoders. Uplink and downlink communication between the infrastructure and the vehicles is tested using the VLC request/response concept. Based on the results, the short-range mesh network provides a secure communication path between street lamp controllers and edge computers through neighbor traffic light controllers that have active cellular connections, as well as peer-to-peer communication, allowing V-VLC-ready cars to exchange information.
]]>Signals doi: 10.3390/signals4020023
Authors: Sharanya Srinivas Andrew Herschfelt Daniel W. Bliss
As radio frequency (RF) hardware continues to improve, two-way ranging (TWR) has become a viable approach for high-precision ranging applications. The precision of a TWR system is fundamentally limited by estimates of the time offset T between two platforms and the time delay τ of a signal propagating between them. In previous work, we derived a family of optimal “one-shot” joint delay–offset estimators and demonstrated that they reduce to a system of linear equations under reasonable assumptions. These estimators are simple and computationally efficient but are also susceptible to channel impairments that obstruct one or more measurements. In this work, we formulate an extended Kalman filter (EKF) for this class of estimators that specifically addresses this limitation. Unlike a generic KF approach, the proposed solution specifically integrates the estimation process to minimize the computational complexity. We benchmark the proposed first- and second-order EKF solutions against the existing one-shot estimators in a MATLAB Monte Carlo simulation environment. We demonstrate that the proposed solution achieves comparable estimation performance and, in the case of the second-order solution, reduces the computation time by an order of magnitude.
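The "system of linear equations" referenced above reduces, in the textbook single two-way exchange, to two equations in the delay τ and offset T. This sketch shows that baseline one-shot solution; the paper's estimator family generalizes it, and the timestamp names are assumptions.

```python
def one_shot_twr(t1, t2, t3, t4):
    """Textbook one-shot joint delay/offset estimate from a single
    two-way exchange: A transmits at t1 (A's clock), B receives at t2
    and replies at t3 (B's clock), A receives the reply at t4.
    Model: t2 = t1 + tau + T and t4 = t3 + tau - T, where T is B's
    clock offset relative to A and tau the propagation delay."""
    tau = ((t2 - t1) + (t4 - t3)) / 2.0   # propagation delay
    T = ((t2 - t1) - (t4 - t3)) / 2.0     # clock offset
    return tau, T
```

Because each estimate consumes one full exchange, a single obstructed measurement corrupts the result, which is the limitation the proposed EKF smooths over by filtering across exchanges.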
]]>Signals doi: 10.3390/signals4020022
Authors: Dionysios Anyfantis Athanasios Koutras George Apostolopoulos Ioanna Christoyianni
Breast cancer is the most common cancer in women, a leading cause of morbidity and mortality, and a significant health issue worldwide. According to the World Health Organization’s cancer awareness recommendations, mammographic screening should be regularly performed on middle-aged or older women to increase the chances of early cancer detection. Breast density is widely known to be related to the risk of cancer development. The American College of Radiology Breast Imaging Reporting and Data System categorizes mammography into four levels based on breast density, ranging from ACR-A (least dense) to ACR-D (most dense). Computer-aided diagnostic (CAD) systems can now detect suspicious regions in mammograms and identify abnormalities more quickly and accurately than human readers. However, their performance is still influenced by the tissue density level, which must be considered when designing such systems. In this paper, we propose a novel method that uses CycleGANs to transform suspicious regions of mammograms from ACR-B, -C, and -D levels to ACR-A level. This transformation aims to reduce the masking effect caused by thick tissue and separate cancerous regions from surrounding tissue. Our proposed system enhances the performance of conventional CNN-based classifiers significantly by focusing on regions of interest that would otherwise be misidentified due to fatty masking. Extensive testing on different types of mammograms (digital and scanned X-ray film) demonstrates the effectiveness of our system in identifying normal, benign, and malignant regions of interest.
]]>Signals doi: 10.3390/signals4020021
Authors: Eugenia I. Toki Giorgos Tatsis Vasileios A. Tatsis Konstantinos Plachouras Jenny Pange Ioannis G. Tsoulos
Early detection and evaluation of children at risk of neurodevelopmental disorders and/or communication deficits is critical. While the current literature indicates a high prevalence of neurodevelopmental disorders, many children remain undiagnosed, resulting in missed opportunities for effective interventions that could have had a greater impact if administered earlier. Clinicians face a variety of complications during neurodevelopmental disorder evaluation procedures and must make greater use of digital tools to aid early detection efficiently. Artificial intelligence enables novel approaches to decision-making, classification, and diagnosis. The current research investigates the efficacy of various machine learning approaches on the biometric SmartSpeech datasets. These datasets come from a new innovative system that includes a serious game which gathers children’s responses to specifically designed speech and language activities and their manifestations, intending to assist during the clinical evaluation of neurodevelopmental disorders. The machine learning approaches utilized the algorithms Radial Basis Function, Neural Network, Deep Learning Neural Networks, and a variation of Grammatical Evolution (GenClass). The most significant results show improved accuracy (%) when using the eye-tracking dataset; more specifically: (i) for the class Disorder with GenClass (92.83%), (ii) for the class Autism Spectrum Disorders with Deep Learning Neural Networks layer 4 (86.33%), (iii) for the class Attention Deficit Hyperactivity Disorder with Deep Learning Neural Networks layer 4 (87.44%), (iv) for the class Intellectual Disability with GenClass (86.93%), (v) for the class Specific Learning Disorder with GenClass (88.88%), and (vi) for the class Communication Disorders with GenClass (88.70%). Overall, the results indicate that GenClass is nearly always the top competitor, opening avenues for future studies toward automatically classifying and assisting clinical assessments for children with neurodevelopmental disorders.
]]>Signals doi: 10.3390/signals4020020
Authors: Khadija Attouri Majdi Mansouri Mansour Hajji Abdelmalek Kouadri Kais Bouzrara Hazem Nounou
In this work, an effective Fault Detection and Diagnosis (FDD) strategy designed to increase the performance and accuracy of fault diagnosis in grid-connected photovoltaic (GCPV) systems is developed. The evolved approach is threefold: first, a pre-processing of the training dataset is applied using a multiscale scheme that decomposes the data at multiple scales using high-pass/low-pass filters to separate the noise from the informative attributes and suppress stochastic samples. Second, a principal component analysis (PCA) technique is applied to the newly obtained data to select, extract, and preserve only the most relevant, informative, and uncorrelated attributes; and finally, to distinguish between the diverse conditions, the extracted attributes are utilized to train the NN classifiers. In this study, an effort is made to take into consideration all potential and frequent faults that might occur in PV systems. Thus, twenty-one faulty scenarios (line-to-line, line-to-ground, connectivity faults, and faults that can affect the normal operation of the bypass diodes) have been introduced and treated at different levels and locations; each scenario comprises various and diverse conditions, including the occurrence of simple faults in the PV1 array, simple faults in the PV2 array, multiple faults in PV1, multiple faults in PV2, and mixed faults in both PV arrays, in order to ensure a complete and global analysis, thereby reducing the loss of generated energy and maintaining the reliability and efficiency of such systems. The obtained outcomes demonstrate that the proposed approach not only achieves good accuracies but also reduces runtimes during the diagnosis process by avoiding noisy and stochastic data, thereby removing irrelevant and correlated samples from the original dataset.
]]>Signals doi: 10.3390/signals4020019
Authors: Deeksha Adiani Kelley Colopietro Joshua Wade Miroslava Migovich Timothy J. Vogus Nilanjan Sarkar
Computer-based job interview training, including virtual reality (VR) simulations, has gained popularity in recent years to support and aid autistic individuals, who face significant challenges and barriers in finding and maintaining employment. Although popular, these training systems often fail to resemble the complexity and dynamism of the employment interview, as the dialogue management for the virtual conversation agent either relies on choosing from a menu of prespecified answers, or dialogue processing is based on keyword extraction from the transcribed speech of the interviewee, which depends on the interview script. We address this limitation through automated dialogue act classification via transfer learning. This allows for recognizing intent from user speech, independent of the domain of the interview. We also redress the lack of training data for a domain-general job interview dialogue act classifier by providing an original dataset with responses to interview questions within a virtual job interview platform from 22 autistic participants. Participants’ responses to a customized interview script were transcribed to text and annotated according to a custom 13-class dialogue act scheme. The best classifier was a fine-tuned bidirectional encoder representations from transformers (BERT) model, with an f1-score of 87%.
]]>Signals doi: 10.3390/signals4020018
Authors: Ram M. Narayanan Bryan Tsang Ramesh Bharadwaj
This paper investigates the use of micro-Doppler spectrogram signatures of flying targets, such as drones and birds, to aid in their remote classification. Using a custom-designed 10-GHz continuous wave (CW) radar system, measurements from different scenarios on a variety of targets were recorded to create datasets for image classification. Time/velocity spectrograms generated for micro-Doppler analysis of multiple drones and birds were used for target identification and movement classification using TensorFlow. Using support vector machines (SVMs), the results showed an accuracy of about 90% for drone size classification, about 96% for drone vs. bird classification, and about 85% for individual drone and bird distinction between five classes. Different characteristics of target detection were explored, including the landscape and behavior of the target.
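A time/velocity spectrogram of a micro-Doppler signature can be generated as follows. The signal here is a synthetic CW radar return with a sinusoidal blade modulation; the sampling rate, bulk Doppler shift, rotor rate, and frequency excursion are all illustrative assumptions, not the paper's measured data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0                       # assumed baseband sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f_body = 500.0                    # assumed bulk Doppler shift of the target
f_mod, dev = 20.0, 100.0          # assumed rotor rate and excursion (Hz)

# Synthetic CW return: carrier Doppler plus sinusoidal phase modulation
# mimicking rotating blades.
x = np.exp(1j * (2 * np.pi * f_body * t
                 + (dev / f_mod) * np.sin(2 * np.pi * f_mod * t)))

# Two-sided spectrogram of the complex baseband signal; Sxx is the
# time/frequency image that would be fed to the image classifier.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)
```

In the resulting image the instantaneous Doppler oscillates between roughly 400 and 600 Hz at the rotor rate, the kind of periodic striation that distinguishes drones from the irregular flapping signatures of birds.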
]]>Signals doi: 10.3390/signals4020017
Authors: Vasileios Cheimaras Nikolaos Peladarinos Nikolaos Monios Spyridon Daousis Spyridon Papagiakoumos Panagiotis Papageorgas Dimitrios Piromalis
Emergency Communication Systems (ECS) are network-based systems that may enable people to exchange information during crises and physical disasters when basic communication options have collapsed. They may be used to restore communication in off-grid areas or even when normal telecommunication networks have failed. These systems may use technologies such as Low-Power Wide-Area Networks (LPWAN) and Software-Defined Wide Area Networks (SD-WAN), which can be specialized as software applications and Internet of Things (IoT) platforms. In this article, we present a comprehensive discussion of the existing ECS use cases and current research directions regarding the use of unconventional and hybrid methods for establishing communication between a specific site and the outside world. The ECS system proposed and simulated in this article consists of an autonomous wireless 4G/LTE base station and a LoRa network utilizing a hybrid IoT communication platform combining LPWAN and SD-WAN technologies. The LoRa-based wireless network was simulated using Network Simulator 3 (NS3), focusing on reliable and sufficient data transfer between an appropriate gateway and LPWAN sensor nodes to provide trustworthy communications. The proposed scheme provided efficient data transfer with low data losses by optimizing the installation of the gateway within the premises, while the SD-WAN scheme that was simulated using the MATLAB simulator and LTE Toolbox in conjunction with an ADALM PLUTO SDR device proved to be an outstanding alternative communication solution as well. Its performance was measured after recombining all received data blocks, leading to a beneficial proposal to researchers and practitioners regarding the benefits of using an on-premises IoT communication platform.
]]>Signals doi: 10.3390/signals4020016
Authors: Aníbal Chaves Fábio Mendonça Sheikh Shanawaz Mostafa Fernando Morgado-Dias
Through the development of artificial intelligence, some capabilities of human beings have been replicated in computers. Among the developed models, convolutional neural networks stand out considerably because they make it possible for systems to have the inherent capabilities of humans, such as pattern recognition in images and signals. However, conventional methods are based on deterministic models, which cannot express the epistemic uncertainty of their predictions. The alternative consists of probabilistic models, although these are considerably more difficult to develop. To address the problems related to the development of probabilistic networks and the choice of network architecture, this article proposes the development of an application that allows the user to choose the desired architecture with the trained model for the given data. This application, named “Graphical User Interface for Probabilistic Neural Networks”, allows the user to develop or to use a standard convolutional neural network for the provided data, with networks already adapted to implement a probabilistic model. Contrary to the existing models for generic use, which are deterministic and already pre-trained on databases to be used in transfer learning, the approach followed in this work creates the network layer by layer, with training performed on the provided data, originating a specific model for the data in question.
]]>Signals doi: 10.3390/signals4020015
Authors: Sujan Chandra Roy Nobuo Funabiki Md. Mahbubur Rahman Bin Wu Minoru Kuribayashi Wen-Chung Kao
Currently, the Internet of Things (IoT) has become common in various applications, including smart factories, smart cities, and smart homes. In these applications, wireless local-area networks (WLANs) are widely used due to their high-speed data transfer, flexible coverage ranges, and low costs. To enhance the performance, the WLAN configuration should be optimized in dense WLAN environments where multiple access points (APs) and hosts exist. Previously, we have studied the active AP configuration algorithm for dual interfaces using IEEE802.11n and 11ac protocols at each AP under non-channel bonding (non-CB). In this paper, we study the algorithm considering channel bonding (CB) to enhance its capacity by bonding two channels together. To improve the throughput estimation accuracy of the algorithm, a reduction factor is introduced for hosts contending for the same AP. For evaluations, we conducted extensive experiments using the WIMNET simulator and the testbed system using Raspberry Pi 4B APs. The results show that the estimated throughput matches the measured one well, and the proposal achieves higher throughput with a smaller number of active APs than the previous configurations.
]]>Signals doi: 10.3390/signals4010014
Authors: Rahul Arun Paropkari Cory Beard
Mobile networks of the fifth generation have stringent requirements for data throughput, latency, and reliability. Dual or multi-connectivity is implemented to meet the mobility requirements for certain essential 5G use cases, ensuring the user’s connection to one or more radio links. Packet duplication (PD) over multi-connectivity is a method of compensating for lost packets by reducing re-transmissions on the same erroneous wireless channel. Utilizing two or more uncorrelated links, a high degree of availability can be attained with this strategy. However, complete packet duplication is inefficient and frequently unnecessary. Wireless channel conditions can change frequently and may not allow for PD. We provide a novel adaptive fractional packet duplication (A-FPD) mechanism for enabling and disabling packet duplication based on a variety of parameters. The signal-to-interference-plus-noise ratio (SINR) and fade duration outage probability (FDOP) are important performance indicators for wireless networks and are used to evaluate and contrast several packet duplication scenarios. Using ns-3 and MATLAB, we present our simulation results for the multi-connectivity and proposed A-FPD schemes. Our technique duplicates only enough packets across multiple connections to meet the outage criteria.
]]>Signals doi: 10.3390/signals4010013
Authors: Harshini Gangapuram Vidya Manian
Multiclass motor imagery classification is essential for brain–computer interface systems such as prosthetic arms. The compressive sensing of EEG helps classify brain signals in real-time, which is necessary for a BCI system. However, compressive sensing is limited, despite its flexibility and data efficiency, because of its sparsity and high computational cost in reconstructing signals. Although the constraint of sparsity in compressive sensing has been addressed through neural networks, its signal reconstruction remains slow, and the computational cost increases to classify the signals further. Therefore, we propose a 1D-Convolutional Residual Network that classifies EEG features in the compressed (sparse) domain without reconstructing the signal. First, we extract only wavelet features (energy and entropy) from raw EEG epochs to construct a dictionary. Next, we classify the given test EEG data based on the sparse representation of the dictionary. The proposed method is computationally inexpensive, fast, and has high classification accuracy as it uses a single feature to classify without preprocessing. The proposed method is trained, validated, and tested using multiclass motor imagery data of 109 subjects from the PhysioNet database. The results demonstrate that the proposed method outperforms state-of-the-art classifiers with 96.6% accuracy.
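The dictionary-based classification step can be illustrated with a simplified residual rule. The paper classifies via a sparse representation of a wavelet-feature dictionary; this sketch substitutes a per-class least-squares residual as a stand-in, and the dictionaries and shapes are assumptions.

```python
import numpy as np

def classify_by_residual(D_per_class, y):
    """Simplified sparse-representation-style classifier: the test
    feature vector y is attributed to the class whose sub-dictionary
    explains it with the smallest least-squares residual. This is a
    stand-in for the paper's sparse-domain classification, not its
    exact algorithm."""
    residuals = []
    for D in D_per_class:                       # D: (n_features, n_atoms)
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))
```

Because classification happens directly on the (compressed) feature vector, no signal reconstruction is needed, which is the source of the speed advantage claimed above.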
]]>Signals doi: 10.3390/signals4010012
Authors: Stamatia F. Drampalou Nikolaos I. Miridakis Helen C. Leligou Panagiotis A. Karkazis
Next-generation wireless communications aim to utilize mmWave/subTHz bands. In this regime, signal propagation is vulnerable to interference and path losses. To overcome this issue, a novel technology called the reconfigurable intelligent surface (RIS) has been introduced. RISs digitally control the reflected signals using many passive reflector arrays and implement a smart and modifiable radio environment for wireless communications. Nonetheless, channel estimation is the main problem of RIS-assisted systems because of its direct dependence on the system architecture design, the transmission channel configuration, and the methods used to compute channel state information (CSI) at the base station (BS) and RIS. In this paper, a concise survey of up-to-date RIS-assisted wireless communications is provided, covering massive multiple-input multiple-output (mMIMO), multiple-input single-output (MISO), and cell-free systems with an emphasis on effective algorithms for computing CSI. In addition, we present the effectiveness of the CSI computation algorithms for different communication systems and techniques, and highlight the most important ones.
]]>Signals doi: 10.3390/signals4010011
Authors: Santiago Marco
Being the new editor-in-chief of Signals is a great honour and a daunting task [...]
]]>Signals doi: 10.3390/signals4010010
Authors: Anika Alim Masudul H. Imtiaz
EEG (electroencephalogram) signals can be used reliably to extract critical information regarding ADHD (attention deficit hyperactivity disorder), a childhood neurodevelopmental disorder. The early detection of ADHD is important to slow the disorder's development and reduce its long-term impact. This study aimed to develop a computer algorithm to identify children with ADHD automatically from their characteristic brain waves. An EEG machine learning pipeline is presented here, including signal preprocessing and data preparation steps, with thorough explanations and rationale. A large public dataset of 120 children was selected, containing large variability and minimal measurement bias in data collection, along with reproducible, child-friendly visual attentional tasks. Unlike other studies, linear EEG features were extracted from only the first four EEG sub-bands to train a Gaussian SVM-based model. This eliminates signals above 30 Hz, reducing the computational load for model training while maintaining a mean accuracy of ~94%. We also performed rigorous validation (obtaining 93.2% and 94.2% accuracy for holdout and 10-fold cross-validation, respectively) to ensure that the developed model is minimally impacted by the bias and overfitting that commonly appear in ML pipelines. These performance metrics indicate the ability to automatically identify children with ADHD in a local clinical setting and provide a baseline for further clinical evaluation and timely therapeutic attempts.
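As an illustration of restricting features to the first four EEG sub-bands (everything above 30 Hz discarded), here is a hedged numpy sketch of band-power computation; the exact band edges, sampling rate, and the `band_powers` helper are assumptions, since the paper's linear features are not specified here.

```python
import numpy as np

def band_powers(x, fs, bands):
    """Mean spectral power of a 1-D EEG epoch inside each (lo, hi) band in Hz."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# The four sub-bands kept by the study; anything above 30 Hz is dropped
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

fs = 256  # assumed sampling rate (Hz)
t = np.arange(fs * 2) / fs
# Toy epoch: a 10 Hz (alpha-band) oscillation plus a little noise
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
powers = band_powers(epoch, fs, BANDS)
```

Dropping the bins above 30 Hz is what shrinks the feature space, and with it the training cost.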
]]>Signals doi: 10.3390/signals4010009
Authors: Mahmoud Abdel-Latif Mohammad Reza Askari Mudassir M. Rashid Minsun Park Lisa Sharp Laurie Quinn Ali Cinar
Wearable sensor data can be integrated and interpreted to improve the treatment of chronic conditions, such as diabetes, by enabling treatment decisions to be adjusted based on physical activity and psychological stress assessments. The challenges of using biological analytes to frequently detect physical activity (PA) and acute psychological stress (APS) in daily life necessitate the use of data from noninvasive sensors in wearable devices, such as wristbands. We developed a recurrent multi-task deep neural network (NN) with long short-term memory (LSTM) architecture to integrate data from multiple sensors (blood volume pulse, skin temperature, galvanic skin response, three-axis accelerometers) and simultaneously detect and classify the type of PA (sedentary state, treadmill run, stationary bike) and of APS (non-stress, emotional anxiety stress, mental stress), and estimate the energy expenditure (EE). The objective was to assess the feasibility of using a multi-task recurrent NN (RNN) rather than independent RNNs for the detection and classification of PA and APS. The multi-task RNN achieves performance comparable to that of independent RNNs, with F1 scores of 98.00% for PA and 98.97% for APS, and a root mean square error (RMSE) of 0.728 cal/(hr·kg) for EE estimation on testing data. The independent RNNs have F1 scores of 99.64% for PA and 98.83% for APS, and an RMSE of 0.666 cal/(hr·kg) for EE estimation. The results indicate that a multi-task RNN can effectively interpret the signals from wearable sensors. Additionally, we developed individual and multi-task extreme gradient boosting (XGBoost) models for the separate and simultaneous classification of PA types and APS types. Multi-task XGBoost achieved F1 scores of 99.89% and 98.31% for the classification of PA types and APS types, respectively, while the independent XGBoost models achieved F1 scores of 99.68% and 96.77%, respectively. The results indicate that both multi-task RNN and XGBoost can be used for the detection and classification of PA and APS without loss of performance relative to separate, individual classification systems. People with diabetes can achieve better outcomes and quality of life when physical activity and psychological stress assessments are included in treatment decision-making.
]]>Signals doi: 10.3390/signals4010008
Authors: Dimitrios Paraskevopoulos Christos Spandonidis Fotis Giannopoulos
Three-phase induction motors (IMs) are considered an essential part of electromechanical systems. Although IMs operate efficiently even in harsh environments, in many cases they show deterioration. A crucial fault type that must be diagnosed early is the stator winding fault caused by short circuits. Motor current signature analysis is a promising method for the failure diagnosis of power systems. Wavelets are ideal for both time- and frequency-domain analyses of the electrical current of nonstationary signals. In this paper, the signal data are obtained from simulations of an induction motor under various stator winding fault conditions and one normal operating condition. Our main contribution is the presentation of a fault diagnosis system based on a hybrid discrete wavelet–CNN method. First, the time series of the currents are processed with discrete wavelet analysis. In this way, the harmonic frequencies of the faults are successfully captured, and features carrying valuable information can be extracted. Next, the features are fed into a convolutional neural network (CNN) model that achieves competitive accuracy and requires significantly less training time. The motivations for integrating CNNs with wavelet analysis results for fault diagnosis are as follows: (1) monitoring is automated, as no human operators are needed to examine the results; and (2) deep learning algorithms have the potential to identify faults even more indistinguishable and complex than those the human eye could.
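The paper's pipeline is wavelet-based, but the underlying idea of motor current signature analysis — fault-related harmonics appearing in the current spectrum — can be illustrated with a simpler FFT check; the 150 Hz fault component, the signal model, and the `spectrum_peak_ratio` helper below are hypothetical stand-ins, not the authors' simulation.

```python
import numpy as np

fs, f_supply = 2000, 50           # sampling rate and supply frequency (Hz)
t = np.arange(fs) / fs             # 1 s of simulated stator current

# Healthy current plus a small fault-related harmonic (assumed at 150 Hz here)
healthy = np.sin(2 * np.pi * f_supply * t)
faulty = healthy + 0.05 * np.sin(2 * np.pi * 3 * f_supply * t)

def spectrum_peak_ratio(i, fs, f_fault, f_main):
    """Amplitude of the fault harmonic relative to the supply component."""
    amp = np.abs(np.fft.rfft(i)) / i.size
    freqs = np.fft.rfftfreq(i.size, 1.0 / fs)
    return amp[np.argmin(np.abs(freqs - f_fault))] / amp[np.argmin(np.abs(freqs - f_main))]

ratio_healthy = spectrum_peak_ratio(healthy, fs, 150, 50)
ratio_faulty = spectrum_peak_ratio(faulty, fs, 150, 50)
```

The wavelet decomposition in the paper plays a similar role to this spectral ratio, but localizes the fault harmonics in both time and frequency.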
]]>Signals doi: 10.3390/signals4010007
Authors: Vincent Nsed Ogar Sajjad Hussain Kelum A. A. Gamage
When a fault occurs on a transmission line, the relay should send a trip signal to the circuit breaker to isolate the line. Timely detection is integral to fault protection and the management of transmission lines in power systems. This paper focuses on using threshold current and voltage values to reduce the delay and trip time of instantaneous overcurrent relay protection for a 330 kV transmission line. The MATLAB wavelet transform toolbox and a Simulink model were used to design the model, to detect the threshold value, and to set the coordination time for the backup relay to trip if the primary relay fails to operate or clear the fault on time. The proposed model was compared against a model without the threshold value. The simulation results show that the two relays achieve trip times 60% to 99.87% faster than techniques that do not use threshold values. The proposed model can eliminate trial-and-error in programming the instantaneous overcurrent relay settings for optimal performance.
]]>Signals doi: 10.3390/signals4010006
Authors: Constantina Isaia Michalis P. Michaelides
In recent years, tremendous advances have been made in the design and applications of wireless networks and embedded sensors. The combination of sophisticated sensors with wireless communication has introduced new applications, which can simplify people's daily activities, increase independence, and improve quality of life. Although numerous positioning techniques and wireless technologies have been introduced over the last few decades, there is still a need for improvement in the efficiency, accuracy, and performance of the various applications. The importance of localization has grown even further recently due to the coronavirus pandemic, which led people to spend more time indoors. Improvements can be achieved by integrating sensor fusion and combining various wireless technologies to take advantage of their individual strengths. Integrated sensing is also envisaged in upcoming technologies such as 6G. The primary aim of this review article is to discuss and evaluate the different wireless positioning techniques and technologies available for both indoor and outdoor localization. This, in combination with the analysis of the various methods discussed, including active and passive positioning, SLAM, PDR, integrated sensing, and sensor fusion, will pave the way for designing future wireless positioning systems.
]]>Signals doi: 10.3390/signals4010005
Authors: Signals Editorial Office Signals Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
]]>Signals doi: 10.3390/signals4010004
Authors: Athanasios Vavoulis Patricia Figueiredo Athanasios Vourvopoulos
Motor imagery (MI)-based brain–computer interfaces (BCIs) have shown increasing potential for the rehabilitation of stroke patients; nonetheless, their implementation in clinical practice has been restricted by their low classification accuracy. To date, although much research has benchmarked and highlighted the most valuable classification algorithms in BCI configurations, most of it uses offline data rather than real BCI performance during closed-loop (online) sessions. Since rehabilitation training relies on the availability of an accurate feedback system, we surveyed articles on current and past EEG-based BCI frameworks that report the online classification of the movement of two upper limbs in both healthy volunteers and stroke patients. We found that recently developed deep-learning methods do not outperform traditional machine-learning algorithms. In addition, patients and healthy subjects exhibit similar classification accuracy in current BCI configurations. Lastly, in terms of neurofeedback modality, functional electrical stimulation (FES) yielded the best performance compared to non-FES systems.
]]>Signals doi: 10.3390/signals4010003
Authors: Mateusz Malarczyk Mateusz Zychlewicz Radoslaw Stanislawski Marcin Kaminski
This paper deals with the implementation of an adaptive speed controller for two electrical machines coupled by a long shaft. The two main parts of the study are the synthesis of the neural adaptive controller and its hardware implementation using a low-cost system based on an STM Discovery board. The link between the control system, the power converters, and the motors is established with an ARM device. A radial basis function neural network (RBFNN) is used as the adaptive speed controller. The network coefficients are updated online to ensure high system dynamics and correct operation under disturbances. The results include transients obtained in simulations and experimental tests.
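A minimal sketch of the online RBFNN weight update the abstract describes, reduced to a scalar toy problem; the Gaussian basis, fixed centers, learning rate, and the linear target mapping are all illustrative assumptions rather than the authors' drive-control design.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, centers, width):
    """Gaussian radial basis activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

centers = np.linspace(-1.0, 1.0, 7)   # fixed centers over the error range
width = 0.4
w = np.zeros_like(centers)            # output weights, adapted online
eta = 0.05                            # learning rate

# Toy target mapping (error -> control), unknown to the controller
target = lambda e: 2.0 * e

errors = []
for _ in range(2000):
    e = rng.uniform(-1, 1)            # current speed error
    phi = rbf(e, centers, width)
    u = w @ phi                       # controller output
    err = target(e) - u               # tracking error drives the update
    w += eta * err * phi              # online gradient update of the weights
    errors.append(abs(err))

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
```

The same update structure runs on the embedded ARM device in the paper, with the plant in the loop instead of a fixed target mapping.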
]]>Signals doi: 10.3390/signals4010002
Authors: Najeeb ur Rehman Malik Syed Abdul Rahman Abu-Bakar Usman Ullah Sheikh Asma Channa Nirvana Popescu
Human Action Recognition (HAR) is a branch of computer vision that deals with the identification of human actions at various levels, including the low level, action level, and interaction level. Previously, a number of HAR algorithms were proposed based on handcrafted methods for action recognition. However, handcrafted techniques are inefficient at recognizing interaction-level actions, which involve complex scenarios. Meanwhile, traditional deep learning-based approaches take the entire image as input and then extract large volumes of features, which greatly increases system complexity, resulting in significantly higher computation time and resource utilization. Therefore, this research focuses on the development of an efficient multi-view interaction-level action recognition system based on a deep learning architecture that uses 2D skeleton data to achieve higher accuracy with reduced computational complexity. The proposed system extracts 2D skeleton data from the dataset using the OpenPose technique. The extracted 2D skeleton features are then given directly as input to a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) architecture for action recognition. To reduce complexity, only the extracted features, rather than the whole image, are given to the CNN-LSTM architecture, eliminating the need for image-level feature extraction. The proposed method was compared with existing methods, and the outcomes confirm its potential. The proposed OpenPose-CNNLSTM achieved an accuracy of 94.4% on MCAD (Multi-camera action dataset) and 91.67% on IXMAS (INRIA Xmas Motion Acquisition Sequences). Our method also significantly decreases the computational complexity by reducing the number of input features to 50.
]]>Signals doi: 10.3390/signals4010001
Authors: Md. Golam Sarower Rayhan M. Khalid Hasan Khan Mahfuza Tahsin Shoily Habibur Rahman Md. Rakibur Rahman Md. Tusar Akon Mahfuzul Hoque Md. Rayhan Khan Tanvir Rayhan Rifat Fahmida Akter Tisha Ibrahim Hossain Sumon Abdul Wahab Fahim Mohammad Abbas Uddin Abu Sadat Muhammad Sayem
Conductive textiles have found notable applications as electrodes and sensors capable of detecting biosignals such as the electrocardiogram (ECG), electrogastrogram (EGG), electroencephalogram (EEG), and electromyogram (EMG); other applications include electromagnetic shielding, supercapacitors, and soft robotics. Several classes of materials impart conductivity, including polymers, metals, and non-metals. The most significant materials are polypyrrole (PPy), polyaniline (PANI), poly(3,4-ethylenedioxythiophene) (PEDOT), carbon, and metallic nanoparticles. The processes for making conductive textiles include various deposition methods, polymerization, coating, and printing. Parameters such as conductivity and electromagnetic shielding effectiveness set the benchmark for the performance of conductive textile materials. This review paper focuses on the raw materials used for conductive textiles, the various approaches that impart conductivity, the fabrication of conductive materials, testing methods for electrical parameters, and key technical applications, challenges, and future potential.
]]>Signals doi: 10.3390/signals3040054
Authors: Loris Nanni Luca Trambaiollo Sheryl Brahnam Xiang Guo Chancellor Woolsey
Multilabel learning goes beyond standard supervised learning models by associating a sample with more than one class label. Among the many techniques developed in the last decade to handle multilabel learning, the best approaches are those harnessing the power of ensembles and deep learners. This work proposes merging both methods by combining a set of gated recurrent units, temporal convolutional neural networks, and long short-term memory networks trained with variants of the Adam optimization approach. We examine many Adam variants, each fundamentally based on the difference between present and past gradients, with the step size adjusted for each parameter. We also combine Incorporating Multiple Clustering Centers (IMCC) with a bootstrap-aggregated decision tree ensemble, which is shown to further boost classification performance. In addition, we provide an ablation study assessing the performance improvement that each module of our ensemble produces. Multiple experiments on a large set of datasets representing a wide variety of multilabel tasks demonstrate the robustness of our best ensemble, which is shown to outperform the state-of-the-art.
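One published member of this family of Adam variants is diffGrad, which scales each parameter's step by a friction term derived from the difference between the present and past gradient; the following numpy sketch of a single update step illustrates that idea, and is not the paper's ensemble code.

```python
import numpy as np

def diffgrad_step(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One diffGrad-style update: an Adam step scaled by a friction term
    that depends on the difference between present and previous gradients."""
    m, v, g_prev, t = state
    t += 1
    m = b1 * m + (1 - b1) * g                      # first-moment estimate
    v = b2 * v + (1 - b2) * g ** 2                 # second-moment estimate
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    xi = 1.0 / (1.0 + np.exp(-np.abs(g_prev - g)))  # friction coefficient in (0.5, 1]
    w = w - lr * xi * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, g.copy(), t)

# Minimise f(w) = ||w||^2 from a fixed start
w = np.array([3.0, -2.0])
state = (np.zeros(2), np.zeros(2), np.zeros(2), 0)
for _ in range(500):
    g = 2 * w                                      # gradient of ||w||^2
    w, state = diffgrad_step(w, g, state)

final_norm = float(np.linalg.norm(w))
```

When consecutive gradients are similar (a flat or steadily descending region), the friction term stays near 0.5 and damps the step; when they differ sharply, the step is closer to a full Adam update.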
]]>Signals doi: 10.3390/signals3040053
Authors: Wei-Chang Yeh Chia-Ling Huang Haw-Sheng Wu
The construction of intelligent logistics through intelligent wireless sensing is a modern trend. This study therefore uses the multistate flow network (MFN) to model the actual logistics delivery environment and to consider the different types of transportation routes available to logistics trucks, which previous studies have neglected. Two road types with different speed limits are explored: highways and slow roads. On a highway the truck travels fast, so a single delivery completes quickly; however, that same high speed subjects the truck to additional conditions. For example, if the truck's turning angle is too large, the truck risks overturning, a serious problem that must be included as a constraint. Highways also limit truck load weight, so this limit is included as a constraint as well. On a slow road, where the speed is much lower than on a highway, the truck is not limited by its turning angle. Slow roads still impose a load weight limit, although for the same truck type it is higher than the limit for driving on the highway. Beyond the turning angle and load weight capacity, time efficiency is extremely important in today's logistics delivery, so the delivery completion time is also included as a constraint. Therefore, this study uses the improved d-MP method to study the reliability of logistics delivery by trucks driving on the two road types under these constraints, helping advance the construction of intelligent logistics with intelligent wireless sensing. An illustrative example in an actual environment is introduced.
]]>Signals doi: 10.3390/signals3040052
Authors: Xiaochao Dang Kefeng Wei Zhanjun Hao Zhongyu Ma
This paper uses millimeter-wave radar to recognize gestures across four different scene domains: the experimental environment, the experimental location, the experimental direction, and the experimental personnel. Experiments are carried out in each scene domain, using part of that domain's data as the training set and the remaining data as a validation set. Furthermore, after the original gesture data have been obtained in different scene domains, gesture recognition results from known scenes can be extended to unknown ones. Three kinds of hand gesture features independent of the scene domain are extracted: the range-time spectrum, the range-Doppler spectrum, and the range-angle spectrum. These are fused to represent a complete and comprehensive gesture action, and the gestures are then trained and recognized using a three-dimensional convolutional neural network (CNN) model. Experimental results show that the three-dimensional CNN can fuse the different gesture feature sets. The average recognition rate of the fused gesture features in the same scene domain is 87%, and the average recognition rate in an unknown scene domain is 83.1%, which verifies the feasibility of gesture recognition across scene domains.
]]>Signals doi: 10.3390/signals3040051
Authors: Ioannis G. Tsoulos Alexandros Tzallas Dimitrios Tsalikakis
In this paper, a new sampling technique is proposed that can be used in the Multistart global optimization technique as well as in techniques based on it. The new method takes a limited number of samples from the objective function and uses them to train a Radial Basis Function (RBF) neural network. Subsequently, many samples are drawn from the trained neural network, and those with the smallest network output are used as starting points in the global optimization method. The proposed technique was applied to a wide range of objective functions from the relevant literature, and the results were extremely promising.
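A hedged numpy sketch of the proposed sampling idea, using a Gaussian RBF interpolant in place of a trained RBF network: fit a surrogate to a few objective samples, then keep the candidate points where the surrogate is smallest as starting points; the kernel width, sample counts, and test objective are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x ** 2, axis=-1)   # test objective (minimum at the origin)

# Step 1: a limited number of samples of the objective
X = rng.uniform(-2, 2, size=(30, 2))
y = f(X)

# Step 2: fit a Gaussian RBF interpolant s(z) = sum_i w_i exp(-||z - x_i||^2 / (2 h^2))
h = 0.8
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * h ** 2))
w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)  # small ridge for conditioning

def surrogate(Z):
    G = np.exp(-((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * h ** 2))
    return G @ w

# Step 3: draw many cheap samples from the surrogate, keep the most promising
Z = rng.uniform(-2, 2, size=(500, 2))
starts = Z[np.argsort(surrogate(Z))[:10]]   # best 10 candidate start points
mean_start_value = float(f(starts).mean())
mean_random_value = float(f(Z).mean())
```

The local searches of Multistart would then be launched from `starts` instead of from uniformly random points, spending expensive objective evaluations only where the surrogate looks promising.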
]]>Signals doi: 10.3390/signals3040050
Authors: Konstantinos Marakakis Georgios K. Tairidis Georgia A. Foutsitzi Nikolaos A. Antoniadis Georgios E. Stavroulakis
In this study, a new method for the optimal design of multimode shunt-damping circuits is presented. A modification of the “current-flowing” shunt circuit is employed to control multiple vibration modes of a piezoelectric laminate beam. In addition to the resistive damping components, the method considers the capacitances and the shunting-branch inductors as new design variables. In the suggested optimization strategy, the H∞ norm of the damped system is minimized using the particle swarm optimization (PSO) method. Two additional numerical models are addressed in order to compare the proposed method with other methods from the literature and to thoroughly examine the effect of the design variables on damping performance. To simulate the dynamic behavior of the piezoelectric composite beam, a finite-element model is created that provides more accurate modeling of thick beam structures. Results show that the suggested method can improve damping efficiency compared to other models, achieving peak amplitude reductions of 39.61 dB for the second mode and 55.92 dB for the third mode. Finally, another benefit of the suggested optimal design is the reduction of the required shunt inductance values.
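The optimization loop itself is standard PSO; the sketch below minimizes a sphere function as a stand-in for the H∞ norm of the damped system, with inertia and acceleration coefficients chosen as typical values rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sum(x ** 2, axis=1)       # stand-in for the H-inf norm objective

n, dim, iters = 20, 3, 200                 # swarm size, design variables, iterations
w_in, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n, dim))           # particle positions (design variables)
v = np.zeros((n, dim))
pbest, pbest_val = x.copy(), f(x)          # personal bests
gbest = pbest[np.argmin(pbest_val)]        # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = f(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

best_val = float(pbest_val.min())
```

In the paper's setting, each objective evaluation instead runs the finite-element model of the shunted beam, so the cost per iteration is dominated by that simulation, not the swarm bookkeeping.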
]]>Signals doi: 10.3390/signals3040049
Authors: Carson Ezell Alexandre Lazarian Abraham Loeb
The risk of a catastrophic or existential disaster for our civilization is increasing this century. A significant motivation for a near-term space settlement is the opportunity to safeguard civilization in the event of a planetary-scale disaster. A catastrophic event could destroy the significant cultural, scientific, and technological progress on Earth. However, early space settlements can preserve records of human activity by maintaining a backup data storage system. The backup can also store information about the events leading up to the disaster. The system would improve the ability of early space settlers to recover our civilization after collapse. We show that advances in laser communications and data storage enable the development of a data storage system on the lunar surface with a sufficient uplink data rate and storage capacity to preserve valuable information about the achievements of our civilization and the chronology of the disaster.
]]>Signals doi: 10.3390/signals3040048
Authors: Kiromitis I. Dimitrios Christos V. Bellos Konstantinos A. Stefanou Georgios S. Stergios Ioannis Andrikos Thomas Katsantas Sotirios Kontogiannis
This paper presents a machine-learning approach for detecting swarming events. Three different classification algorithms are tested: the k-Nearest Neighbors algorithm (k-NN), the Support Vector Machine (SVM), and a U-Net Convolutional Neural Network (CNN) newly proposed by the authors, an architecture originally developed for biomedical image segmentation. Next, the authors present their experimental scenario of collecting audio data of swarming and non-swarming events and evaluate the results from the k-NN and SVM classifiers and their proposed CNN algorithm. Finally, the authors compare the three methods and present cross-comparison results identifying the optimal method for early and late (close-to-the-event) detection of swarming.
]]>Signals doi: 10.3390/signals3040047
Authors: Asmita Korde-Patel Richard K. Barry Tinoosh Mohsenin
Compressive sensing is a simultaneous data acquisition and compression technique that can significantly reduce data bandwidth, data storage volume, and power. We apply this technique to transient photometric events. In this work, we analyze the effect of noise on the detection of these events using compressive sensing (CS). We show numerical results on the impact of source and measurement noise on the reconstruction of transient photometric curves generated by gravitational microlensing events. In our work, source noise is defined as background noise, or any inherent noise present in the sampling region of interest. For our models, measurement noise is defined as the noise present during data acquisition. These results can be generalized to any transient photometric CS measurements with source noise and CS data acquisition measurement noise. Our results show that the properties of the CS measurement matrix affect CS reconstruction in the presence of source and measurement noise. We provide potential solutions for improving performance by tuning some of the properties of the measurement matrices. For source noise applications, we show that choosing a measurement matrix with low mutual coherence lowers the CS reconstruction error. Similarly, for measurement noise, we show that choosing a lower expected value for the binomial measurement matrix lowers the CS reconstruction error.
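Mutual coherence, the largest normalized inner product between distinct columns of the measurement matrix, is straightforward to compute directly; the sketch below contrasts a Gaussian matrix with a high-expected-value binomial one, whose nearly identical columns make it highly coherent (the matrix sizes and the p = 0.9 parameter are illustrative choices).

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    A = A / np.linalg.norm(A, axis=0)      # normalize columns to unit length
    G = np.abs(A.T @ A)                    # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)               # ignore each column against itself
    return G.max()

rng = np.random.default_rng(3)
gauss = rng.standard_normal((64, 128))                   # Gaussian measurement matrix
binom = rng.binomial(1, 0.9, (64, 128)).astype(float)    # binomial matrix, high p

mu_gauss = mutual_coherence(gauss)
mu_binom = mutual_coherence(binom)
```

Lowering the expected value of the binomial entries makes the columns less alike and so reduces this coherence, consistent with the reconstruction-error trends reported above.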
]]>Signals doi: 10.3390/signals3040046
Authors: Alwin Poulose
Visible light communication (VLC) is an emerging research area in wireless communication. The system works in much the same way as optical fiber-based communication systems; however, the VLC system uses free space as its transmission medium. The invention of the light-emitting diode (LED) significantly updated the technologies used in modern communication systems. In VLC, the LED acts as a transmitter and sends data in the form of light when the receiver is in the line-of-sight (LOS) condition. The VLC system sends data by blinking the light at a speed too high for the human eye to perceive. The detector receives the flashing light and decodes the transmitted data. One significant advantage of the VLC system over other communication systems is that it is easy to implement using an LED and a photodiode or phototransistor. The system is inexpensive, compact, and low-power; it prevents radio interference and eliminates the need for broadcast rights and buried cables. In this paper, we investigate the performance of an indoor VLC system using Optisystem simulation software. We simulated an indoor VLC system using LOS and non-line-of-sight (NLOS) propagation models. Our simulation analyzes the LOS propagation model by considering the direct path with a single LED as the transmitter. The NLOS propagation model-based VLC system is analyzed in two scenarios, with single and dual LEDs as the transmitter. The effects of the incidence and irradiance angles in the LOS propagation model and the eye diagrams of the LOS/NLOS models are investigated to identify signal distortion. We also analyze the impact of the field of view (FOV) in an NLOS propagation model using a single LED as the transmitter and estimate the bitrate (Rb). Our theoretical results show that the simulated system achieved bitrates in the range of 2.1208 × 10^7 to 4.2147 × 10^7 bits/s as the FOV changes from 30° to 90°. A VLC hardware design is further considered for real-time implementation. Our VLC hardware system achieved an average data recovery rate of 70% in the LOS propagation model and 40% in the NLOS propagation model. This analysis shows that our simulated VLC results are technically beneficial for real-world VLC systems.
]]>Signals doi: 10.3390/signals3040045
Authors: Jonathan Piper Peter W. T. Yuen David James
In recent years, a wide range of hyperspectral imaging systems using coded apertures have been proposed. Many implement compressive sensing to achieve faster acquisition of a hyperspectral data cube, but it is also potentially beneficial to use coded aperture imaging in sensors that capture full-rank (non-compressive) measurements. In this paper we analyse the signal-to-noise ratio for such a sensor, which uses a Hadamard code pattern of slits instead of the single slit of a typical pushbroom imaging spectrometer. We show that the coded slit sensor may have performance advantages in situations where the dominant noise sources do not depend on the signal level, but that where shot noise dominates, a conventional single-slit sensor would be more effective. These results may also have implications for the utility of compressive sensing systems.
]]>Signals doi: 10.3390/signals3040044
Authors: Vasileios Christou Ioannis Tsoulos Alexandros Arjmand Dimitrios Dimopoulos Dimitrios Varvarousis Alexandros T. Tzallas Christos Gogos Markos G. Tsipouras Evripidis Glavas Avraam Ploumis Nikolaos Giannakeas
Hemiplegia is a condition caused by brain injury that affects a significant percentage of the population. Patients suffering from this condition experience varying degrees of weakness, spasticity, and motor impairment on the left or right side of the body. This paper proposes an automatic feature selection and construction method based on grammatical evolution (GE) for radial basis function (RBF) networks that can classify the hemiplegia type, distinguishing patients from healthy individuals. The proposed algorithm is tested on a dataset containing entries from the accelerometer sensors of the RehaGait mobile gait analysis system, which are placed on various parts of the patients’ bodies. The collected data were split into 2-second windows and underwent a manual pre-processing and feature extraction stage. The extracted data are then presented as input to the proposed GE-based method to create new, more efficient features, which are in turn introduced as input to an RBF network. The paper’s experimental part involved testing the proposed method against four classification methods: an RBF network, a multi-layer perceptron (MLP) trained with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) training algorithm, a support vector machine (SVM), and a GE-based parallel tool for data classification (GenClass). The test results revealed that the proposed solution had the highest classification accuracy (90.07%) compared to the other four methods.
]]>Signals doi: 10.3390/signals3040043
Authors: Christos G. Panagiotopoulos Spyros Kouzoupis Chrysoula Tsogka
Time reversal has been demonstrated to be effective for source and novelty detection and localization. We extend here previous work to the case of a coupled structural-acoustic system, which we refer to as vibro-acoustic. In this case, novelty means a change that the structural system has undergone and which we seek to detect and localize. A single source in the acoustic medium is used to generate the propagating field, and several receivers, in both the acoustic and the structural part, may be used to record the response of the medium to this excitation. This is the forward step. Exploiting time reversibility, the recorded signals are focused back onto the original source location during the backward step. For novelty detection, the difference between the fields recorded before and after the structural modification is backpropagated. We demonstrate that the performance of the method improves when the structural components are taken into account during the backward step. The potential of the method for solving inverse problems as they appear in non-destructive testing and structural health monitoring applications is illustrated with several numerical examples obtained using a finite element method.
]]>Signals doi: 10.3390/signals3040042
Authors: Nikolaos Anastasopoulos Evangelos Dermatas
Grammatical inference of context-free grammars from positive and negative language examples is among the most challenging tasks in modern artificial and natural language technology. Recently, several implementations combining various techniques, usually based on the Backus–Naur form, have been proposed. In this paper, we explore a new implementation of grammatical inference using evolutionary methods focused on the Greibach normal form and exploiting its properties, and we also propose new solutions both in the evolutionary processes and in the corresponding fitness estimation.
]]>Signals doi: 10.3390/signals3040041
Authors: Jacob Holtom Andrew Herschfelt Isabella Lenz Owen Ma Hanguang Yu Daniel W. Bliss
Validating RF applications is traditionally time-consuming, even for relatively simple systems. We developed the WISCA Software-Defined Radio Network (WISCANet) to accelerate the implementation and validation of radio applications over-the-air (OTA). WISCANet is hardware-agnostic control software that automatically configures and controls a software-defined radio (SDR) network. By abstracting the hardware controls away from the user, WISCANet allows a non-expert user to deploy an OTA application by simply defining a baseband processing chain in a high-level language. This technology reduces the transition time between system design and OTA deployment, accelerates debugging and validation processes, and makes OTA experimentation more accessible to users who are not radio hardware experts. WISCANet emulates real-time RF operations, enabling users to perform real-time experiments without the typical restrictions on processing speed and hardware capabilities. WISCANet also supports multiple RF front-ends (RFFEs) per compute node, allowing sub-6 GHz and mmWave systems to coexist on the same node. This coexistence enables simultaneous baseband processing that simplifies and enhances advanced algorithms and beyond-5G applications. In this study, we highlight the capabilities of WISCANet in several sub-6 GHz and mmWave over-the-air demonstrations. The open-source release of this software may be found on the WISCA GitHub page.
]]>Signals doi: 10.3390/signals3040040
Authors: Fei He Andrew Harms Lamar Yaoqing Yang
This paper presents a novel method of tensor rank regularization with bias compensation for channel estimation in a hybrid millimeter wave MIMO-OFDM system. Channel estimation is challenging due to the unknown number of multipath components, which determines the channel rank. In general, finding the intrinsic rank of a tensor is a non-deterministic polynomial-time (NP)-hard problem. However, by leveraging the sparse characteristics of millimeter wave channels, we propose a modified CANDECOMP/PARAFAC (CP) decomposition-based method that jointly estimates the tensor rank and the channel component matrices. Our approach differs from most existing works, which assume the number of channel paths is known; the proposed method is able to estimate channel parameters accurately without prior knowledge of the number of multipath components. The objective of this work is to estimate the tensor rank via a novel sparsity-promoting prior that is incorporated into a standard alternating least squares (ALS) objective. We introduce a weighting parameter to control the impact of the previous estimate and the tensor rank estimation bias compensation in the regularized ALS. The channel information is then extracted from the estimated component matrices. Simulation results show that the proposed scheme outperforms the baseline l1 strategy in terms of accuracy and robustness. They also show that this method significantly improves rank estimation success at the expense of slightly more iterations.
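To illustrate the regularized-ALS skeleton such a method builds on, here is a generic NumPy sketch in which a plain ridge penalty stands in for the paper's sparsity-promoting prior and bias compensation (illustrative dimensions and parameters, not the authors' algorithm):

```python
import numpy as np

def khatri_rao(U, V):
    # column-wise Kronecker product: result[:, r] = kron(U[:, r], V[:, r])
    return np.stack([np.kron(U[:, r], V[:, r]) for r in range(U.shape[1])],
                    axis=1)

def ridge_update(Xn, Z, lam):
    # ridge-regularised least-squares update of one factor matrix:
    # argmin_F ||Xn - F Z^T||_F^2 + lam ||F||_F^2
    R = Z.shape[1]
    return Xn @ Z @ np.linalg.inv(Z.T @ Z + lam * np.eye(R))

def cp_als_ridge(X, R, lam=1e-6, iters=30, seed=0):
    # alternating least squares for a rank-R CP model of a 3-way tensor,
    # X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding (C order)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(iters):
        A = ridge_update(X1, khatri_rao(B, C), lam)
        B = ridge_update(X2, khatri_rao(A, C), lam)
        C = ridge_update(X3, khatri_rao(A, B), lam)
    return A, B, C
```

The rank-estimation step described in the abstract would additionally shrink or prune weak CP components; here the penalty only stabilizes the per-factor least-squares solves.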
]]>Signals doi: 10.3390/signals3030039
Authors: Allan de Lima Samuel Carvalho Douglas Mota Dias Enrique Naredo Joseph P. Sullivan Conor Ryan
GRAPE is an implementation of Grammatical Evolution (GE) in DEAP, an Evolutionary Computation framework in Python, which consists of the necessary classes and functions to evolve a population of grammar-based solutions, while reporting essential measures. This tool was developed at the Bio-computing and Developmental Systems (BDS) Research Group, the birthplace of GE, as an easy-to-use tool (compared to the canonical C++ implementation, libGE) that inherits all the advantages of DEAP, such as selection methods, parallelism, and multiple search techniques, all of which can be used with GRAPE. In this paper, we address some problems to exemplify the use of GRAPE and to perform a comparison with PonyGE2, an existing implementation of GE in Python. The results show that GRAPE has similar performance, but is able to avail of all the extra facilities and functionality found in the DEAP framework. We further show that GRAPE enables GE to be applied to system identification problems, and we demonstrate this on two benchmark problems.
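As background on the mechanism GRAPE implements, the canonical GE genotype-to-phenotype mapping can be sketched as follows; the toy grammar, codon budget, and function names are illustrative and are not GRAPE's API:

```python
GRAMMAR = {  # a toy BNF grammar: <expr> ::= <expr>+<expr> | x | 1
    "<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]],
}

def map_genome(genome, start="<expr>", max_codons=100):
    # canonical GE mapping: each codon, modulo the number of matching
    # productions, picks the expansion of the leftmost non-terminal;
    # the genome wraps around until a codon budget is exhausted
    symbols = [start]
    out, used = [], 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)  # terminal symbol
            continue
        if used >= max_codons:
            return None      # mapping failed (ran out of codons/wraps)
        rules = GRAMMAR[sym]
        choice = genome[used % len(genome)] % len(rules)  # wrapping
        used += 1
        symbols = rules[choice] + symbols
    return "".join(out)
```

A genome such as [0, 1, 2] maps to the phenotype "x+1", while a genome that keeps selecting the recursive production fails to map within the codon budget.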
]]>Signals doi: 10.3390/signals3030038
Authors: Raj Mouli Jujjavarapu Alwin Poulose
Microprocessor designs have become a revolutionary technology in almost every industry, making automation and modern electronic gadgets a reality. In the effort to improve these hardware modules to handle heavy computational loads, designers have substantially reached limits in size, power efficiency, and similar avenues. Due to these constraints, many manufacturers and corporate entities are trying many ways to optimize these compact processors. One such approach is to design microprocessors tailored to a specific operating system; this approach came into the limelight when many companies launched their own microprocessors. In this paper, we look into one method of using an arithmetic logic unit (ALU) module for internet of things (IoT)-enabled devices. A specific set of operations is added to the classical ALU to speed up computational processes in IoT-specific programs. We integrated a compression module and a fast multiplier based on the Vedic algorithm into the 16-bit ALU module. The designed ALU module is also synthesized under a 32-nm HVT cell library from the Synopsys database to generate an overview of the areal efficiency, logic levels, and layout of the designed module; the synthesis also yields a netlist from this database and provides a complete overview of how the module would be manufactured if sent to a foundry.
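As an aside on the fast multiplier mentioned above: the Vedic Urdhva Tiryagbhyam ("vertically and crosswise") scheme computes all cross products of one column position in parallel, which is what makes it attractive in hardware. A decimal-digit software sketch of the pattern follows (the actual ALU works on binary partial products; digits are least-significant first):

```python
def vedic_multiply(a, b):
    # Urdhva Tiryagbhyam on equal-length digit lists, least-significant
    # digit first: column k sums every cross product a[i] * b[k - i],
    # then carries are propagated once at the end.
    n = len(a)
    cols = [0] * (2 * n - 1)
    for k in range(2 * n - 1):
        for i in range(max(0, k - n + 1), min(n, k + 1)):
            cols[k] += a[i] * b[k - i]
    out, carry = [], 0
    for s in cols:
        s += carry
        out.append(s % 10)
        carry = s // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return out
```

In hardware each column becomes an independent partial-product adder tree, so the critical path grows slowly with operand width.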
]]>Signals doi: 10.3390/signals3030037
Authors: Nikolaos Anastasopoulos Ioannis G. Tsoulos Evangelos Dermatas Evangelos Karvounis
In this paper, a novel Elman-type recurrent neural network (RNN) is presented for the binary classification of arbitrary symbol sequences, and a novel training method, including both evolutionary and local search methods, is evaluated using sequence databases from a wide range of scientific areas. An efficient, publicly available software tool is implemented in C++, significantly accelerating (more than 40-fold) the RNN weight estimation process using both SIMD and multi-thread technology. The experimental results on all databases show that the hybrid training method yields improvements in the range of 2% to 25% compared with the standard genetic algorithm.
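For reference, the forward pass of an Elman-type RNN for binary sequence classification, the quantity an evolutionary or local-search trainer evaluates for each candidate weight set, can be sketched in plain Python (illustrative weights and dimensions; the authors' tool is SIMD-accelerated C++):

```python
import math

def elman_forward(seq, Wxh, Whh, Why, bh, by):
    # seq: list of symbol indices; the hidden state h feeds back each step
    H = len(bh)
    h = [0.0] * H
    for x in seq:
        h = [math.tanh(Wxh[j][x]
                       + sum(Whh[j][k] * h[k] for k in range(H))
                       + bh[j])
             for j in range(H)]
    z = sum(Why[k] * h[k] for k in range(H)) + by
    return 1.0 / (1.0 + math.exp(-z))  # probability of class 1
```

A trainer would score each candidate (Wxh, Whh, Why, bh, by) by its classification accuracy over the sequence database.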
]]>Signals doi: 10.3390/signals3030036
Authors: Panagiotis A. Karkazis Konstantinos Railis Stelios Prekas Panagiotis Trakadas Helen C. Leligou
Our contemporary society has never been more connected and aware of vital information in real time, through the use of innovative technologies. A considerable number of applications have transitioned into the cyber-physical domain, automating and optimizing their routines and processes via dense networks of sensing devices and the immense volumes of data they collect and instantly share. In this paper, we propose an innovative architecture based on the monitoring, analysis, planning, and execution (MAPE) paradigm for network and service performance optimization. Our study provides clear evidence that the utilization of learning algorithms, consuming datasets enriched with the users’ empirical opinions as input during the analysis and planning phases, contributes greatly to the optimization of video streaming quality, especially by handling different packet loss rates, paving the way for the achievable provision of a resilient communications platform for calamity assessment and management.
]]>Signals doi: 10.3390/signals3030035
Authors: Maximilian Grobbelaar Souvik Phadikar Ebrahim Ghaderpour Aaron F. Struck Nidul Sinha Rajdeep Ghosh Md. Zaved Iqubal Ahmed
Electroencephalogram (EEG) artifacts such as eyeblinks, eye movements, and muscle movements widely contaminate EEG signals. These unwanted artifacts corrupt the information contained in the EEG signals and degrade the performance of qualitative analysis in clinical applications as well as in EEG-based brain–computer interfaces (BCIs). Applications of the wavelet transform in denoising EEG signals are increasing day by day due to its capability of handling non-stationary signals. This paper surveys the reported wavelet denoising techniques for EEG signals in terms of the quality of noise removal and the retrieval of important information. To evaluate the performance of wavelet denoising techniques for EEG signals and to express the quality of reconstruction, the techniques were assessed based on the results reported in the respective literature. We also compare certain features relevant to the evaluation of wavelet denoising techniques, such as the requirement of a reference channel, automation, online operation, and performance on a single channel.
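A toy, single-level version of the decompose / threshold / reconstruct pipeline that all surveyed wavelet denoisers share, using the orthonormal Haar wavelet and soft thresholding (real EEG pipelines use multi-level decompositions and data-driven thresholds):

```python
def haar_denoise(x, thresh):
    # single-level orthonormal Haar DWT (len(x) must be even):
    # approximation a = (x0 + x1)/sqrt(2), detail d = (x0 - x1)/sqrt(2);
    # soft-threshold the detail coefficients, then invert the transform
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    soft = [max(abs(d) - thresh, 0.0) * (1.0 if d >= 0 else -1.0)
            for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [(a + d) / s, (a - d) / s]
    return out
```

Artifact-free (smooth) stretches pass through almost unchanged, while sharp transients, which concentrate energy in the detail coefficients, are attenuated.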
]]>Signals doi: 10.3390/signals3030034
Authors: Asmita Korde-Patel Richard K. Barry Tinoosh Mohsenin
In this work, we provide a compressive sensing architecture for implementation on a space-based observatory for detecting transient photometric parallax caused by gravitational microlensing events. Compressive sensing (CS) is a simultaneous data acquisition and compression technique, which can greatly reduce the on-board resources required for spaceflight data storage and ground transmission. We simulate microlensing parallax observations using a space observatory constellation based on CS detectors. Our results show that the average CS error is less than 0.5% using 25% Nyquist-rate samples. The error at peak magnification time is significantly lower than the error required to distinguish any two microlensing parallax curves at their peak magnification. Thus, CS is an enabling technology for detecting microlensing parallax without causing any loss in detection accuracy.
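The abstract does not specify the CS reconstruction algorithm; as generic background, one common sparse decoder, orthogonal matching pursuit (OMP), can be sketched as follows (a hypothetical stand-in, not necessarily what the authors used):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily build a k-sparse estimate x
    # with y ~= A @ x by repeatedly picking the column most correlated
    # with the current residual, then re-fitting on the chosen support.
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In a CS pipeline, A would be the product of the measurement matrix and a basis in which the light curve is sparse, and y the compressed on-board samples.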
]]>Signals doi: 10.3390/signals3030033
Authors: Antonella Muroni Daniel Barbar Matteo Fraschini Marco Monticone Giovanni Defazio Francesco Marrosu
INTRODUCTION. Recent neuroimaging studies suggest that replacements for dental loss induce changes in neuroplasticity as well as in correlated connectivity between brain networks. However, as the typical temporal delay in detecting brain activity by neuroimaging cannot account for the influence one neural system exerts over another in a context of real activation (“effective” connectivity), it seems of interest to approach this dynamic aspect of brain networking on the millisecond time scale by exploiting electroencephalographic (EEG) data. MATERIAL AND METHODS. The present study describes one subject who received a new provisional prosthodontic implant in substitution for previous dental repairs. Two EEG sessions were recorded with a portable device before and after positioning of the new dental implant. Using MATLAB-EEGLAB processing supported by the FIELDTRIP and SIFT plugins, the independent component analysis (ICA) components derived from raw EEG signals were rendered as current density fields and interpolated with the dipoles generated by each electrode for a dynamic study of effective connectivity. One more recording session was undertaken six months after the placement of the final implant. RESULTS. Compared to the baseline, the new prosthodontic implant induced a novel modulation of neuroplasticity in sensory-motor areas which was maintained following the definitive implant after six months, as revealed by changes in the effective connectivity: from a basal strong enslavement of a single brain area over the others to an equilibrated inter-related connectivity evenly distributed along the frontotemporal regions of both hemispheres. CONCLUSIONS.
The rapid shift of the effective connectivity after positioning the new prosthodontic implant and its substantial stability after six months suggest the possibility that synaptic modifications, induced by novel sensory motor conditions, modulate the neuroplasticity and reshape the final dynamic frame of the interarea connectivity. Moreover, given the viability of the EEG practice, this approach could be of some interest in assessing the association between oral pathophysiology and neuronal networking.
]]>Signals doi: 10.3390/signals3030032
Authors: Ahmad Droby Berat Kurar Barakat Reem Alaasam Boraq Madi Irina Rabaev Jihad El-Sana
Text line extraction is an essential preprocessing step in many handwritten document image analysis tasks. It includes detecting text lines in a document image and segmenting the regions of each detected line. Deep learning-based methods are frequently used for text line detection. However, only a limited number of methods tackle the problems of detection and segmentation together. This paper proposes a holistic method that applies Mask R-CNN for text line extraction. A Mask R-CNN model is trained to extract text line fractions from document patches, which are then merged to form the text lines of an entire page. The presented method was evaluated on two well-known datasets of historical documents, DIVA-HisDB and ICDAR 2015-HTR, and achieved state-of-the-art results. In addition, we introduce a new challenging dataset of Arabic historical manuscripts, VML-AHTE, where numerous diacritics are present. We show that the presented Mask R-CNN-based method can successfully segment text lines, even in such a challenging scenario.
]]>Signals doi: 10.3390/signals3030031
Authors: Marina Litvak Sarit Divekar Irina Rabaev
Plant classification requires the eye of an expert in botany when subtle differences in stems or petals differentiate between species. Hence, accurate automatic plant classification could be of great assistance to a person who studies agriculture, travels, or explores rare species. This paper focuses on the specific task of urban plant classification. A possible practical application of this work is a tool that assists people growing plants at home to recognize new species and provides the relevant care instructions. Because urban species are barely covered by benchmark datasets, they cannot be accurately recognized by state-of-the-art pre-trained classification models. This paper introduces a new dataset, Urban Planter, for plant species classification, with 1500 images categorized into 15 categories. The dataset contains 15 urban species, which can be grown at home in any climate (mostly desert) and are barely covered by existing datasets. We performed an extensive analysis of this dataset, aimed at answering the following research questions: (1) Does the Urban Planter dataset provide enough information to train accurate deep learning models? (2) Can pre-trained classification models be successfully applied to Urban Planter, and is pre-training on ImageNet beneficial in comparison to pre-training on a much smaller but more relevant dataset? (3) Does two-step transfer learning further improve the classification accuracy? We report the results of experiments designed to answer these questions. In addition, we provide a link to the installation code of the alpha version and a demo video of the web app for urban plant classification based on the best evaluated model. 
To conclude, our contribution is three-fold: (1) we introduce a new dataset of urban plant images; (2) we report the results of an extensive case study with several state-of-the-art deep networks and different configurations for transfer learning; (3) we provide a web application based on the best evaluated model. In addition, we believe that, by extending our dataset in the future to edible plants and assisting people in growing food at home, our research contributes to achieving the United Nations’ 2030 Agenda for Sustainable Development.
]]>Signals doi: 10.3390/signals3030030
Authors: George Voudiotis Anna Moraiti Sotirios Kontogiannis
One of the most critical causes of colony collapse disorder in beekeeping is the Varroa mite. This paper presents an embedded camera module supported by a deep learning algorithm for the early detection of Varroa infestations. This is achieved using a deep learning algorithm that tries to identify, in real time, bees carrying the mite inside the brood frames. The end-node device camera module is placed inside the brood box. It supports offline detection in remote areas with limited network coverage, or online imagery data transmission and mite detection over the cloud. The proposed approach uses a deep learning network for bee object detection and an image processing step to identify the mite on the previously detected objects. Finally, the authors present their proof-of-concept experimentation, which offers a total bee and Varroa detection accuracy of close to 70%, and present and discuss their experimental results in detail.
]]>Signals doi: 10.3390/signals3030029
Authors: Yang Yang Shihao Sun Ao Chen Siyang You Yuqi Shen Zhijun Li Dayang Sun
The ranging error model is generally very complicated in actual ranging technologies. This paper gives an analysis of the biased distance substitution and proposes an unbiased multilateral positioning method to revise the biased substitution, making it an unbiased estimate of the squared distance. An unbiased estimate of the multilateral positioning formula is derived to solve the target node coordinates. Through simulation experiments, it is proved that the algorithm can improve the positioning accuracy, and the improvement is more obvious when the error variance is larger. Experiments using SX1280 also show that the ranging conforms to the biased error model, and the accuracy can be improved by using the unbiased estimator. When the actual experimental error standard deviation is 0.16 m, the accuracy can be improved by 0.15 m.
]]>Signals doi: 10.3390/signals3030028
Authors: Domonkos Varga
Research and development of image quality assessment (IQA) algorithms have been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images in a way that correlates as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage, applying it as weights in the weighted averaging of local image quality scores and thereby emphasizing image regions that are salient to human observers. In contrast to this common practice, this study applies visual saliency in the computation of local image quality itself, based on the observation that local image quality is determined by local image degradation and visual saliency simultaneously. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ show that the proposed method improves on the state of the art at low computational cost.
]]>Signals doi: 10.3390/signals3030027
Authors: Vincent Nsed Ogar Sajjad Hussain Kelum A. A. Gamage
Transmission line fault classification forms the basis of fault protection management in power systems. Because faults have adverse effects on transmission lines, adequate measures must be implemented to avoid power outages. This paper focuses on using the categorical boosting (CatBoost) algorithm classifier to analyse and train multiple voltage and current data from a 330 kV and 500 km-long simulated faulty transmission line model designed using Matlab/Simulink. From this model, 93,340 fault data samples were extracted. The CatBoost classifier was employed to classify the faults after different machine learning algorithms were used to train the same data with different parameters. The trainer achieved a best accuracy of 99.54%, with an error of 0.46%, at 748 iterations out of 1000. The algorithm was selected for its high performance in classifying faults in terms of accuracy, precision, and speed. In addition, it is easy to use and handles multiple datasets. In contrast, a support vector machine and an artificial neural network each have a longer training time than the proposed method’s 58.5 s. Proper fault classification techniques assist in effective fault management and power system control planning, thereby preventing energy waste and providing high performance.
]]>Signals doi: 10.3390/signals3030026
Authors: Yiming Huo
Recent years have seen unprecedentedly fast-growing prosperity in the commercial space industry. Several privately funded aerospace manufacturers, such as Space Exploration Technologies Corporation (SpaceX) and Blue Origin, have transformed what we used to know about this capital-intensive industry and gradually reshaped the future of human civilization. As private spaceflight and multi-planetary immigration gradually turn from science fiction (sci-fi) and theory into reality, both opportunities and challenges will be presented. In this article, we first review the progress in space exploration and the underlying space technologies. Next, we revisit the K-Pg extinction event and the Chelyabinsk event and predict extra-terrestrialization, terraformation, and planetary defense, including the emerging near-Earth object (NEO) observation and NEO impact avoidance technologies and strategies. Furthermore, a framework for the Solar Communication and Defense Networks (SCADN) with advanced algorithms and high efficacy is proposed to enable an Internet of distributed deep-space sensing, communications, and defense to cope with disastrous incidents such as asteroid/comet impacts. Finally, perspectives on the legislation, management, and supervision of founding the proposed SCADN are discussed in depth.
]]>Signals doi: 10.3390/signals3020025
Authors: Robert Friedman
The nematode worm Caenorhabditis elegans has a relatively simple neural system for analysis of information transmission from sensory organ to muscle fiber. Consequently, this study includes an example of a neural circuit from the nematode worm, and a procedure is shown for measuring its information optimality by use of a logic gate model. This approach is useful where the assumptions are applicable for a neural circuit, and also for choosing between competing mathematical hypotheses that explain the function of a neural circuit. In this latter case, the logic gate model can estimate computational complexity and distinguish which of the mathematical models require fewer computations. In addition, the concept of information optimality is generalized to other biological systems, along with an extended discussion of its role in genetic-based pathways of organisms.
]]>Signals doi: 10.3390/signals3020024
Authors: Ana S. Santos Cardoso Rasmus L. Kæseler Mads Jochumsen Lotte N. S. Andreasen Struijk
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces that rely on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate the potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between both days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed.
]]>Signals doi: 10.3390/signals3020023
Authors: Houda Harkat Paulo Monteiro Atilio Gameiro Fernando Guiomar Hasmath Farhana Thariq Ahmed
MIMO-OFDM is a key technology and a strong candidate for 5G telecommunication systems. In the literature, there is no convenient survey that rounds up all the necessary points to be investigated concerning such systems. This in-depth review inspects and interprets the state of the art and addresses several research axes related to MIMO-OFDM systems. Two topics receive special attention: MIMO waveforms and MIMO-OFDM channel estimation. Existing MIMO hardware and software innovations, in addition to MIMO-OFDM equalization techniques, are discussed concisely. In the literature, only a few authors have discussed channel estimation and modeling problems for a variety of MIMO systems; however, to the best of our knowledge, no review paper has so far specifically discussed recent works concerning channel estimation and the equalization process for MIMO-OFDM systems. Hence, the current work focuses on analyzing the algorithms recently used in the field, which could serve as a rich reference for researchers. Moreover, some research perspectives are identified.
]]>Signals doi: 10.3390/signals3020022
Authors: Loris Nanni Alessandra Lumini Andrea Loreggia Alberto Formaggio Daniela Cuza
Recognizing objects in images requires complex skills that involve knowledge of the context and the ability to identify the borders of the objects. In computer vision, this task is called semantic segmentation and refers to the classification of each pixel in an image. The task is of major importance in many real-life scenarios: in autonomous vehicles, it allows the identification of objects surrounding the vehicle; in medical diagnosis, it improves the ability to detect dangerous pathologies early and thus mitigates the risk of serious consequences. In this work, we propose a new ensemble method for the semantic segmentation task. The model is based on convolutional neural networks (CNNs) and transformers. An ensemble uses many different models whose predictions are aggregated to form the output of the ensemble system. The performance and quality of the ensemble prediction are strongly connected with several factors, one of the most important being the diversity among the individual models. In our approach, diversity is enforced by adopting different loss functions and testing different data augmentations. We developed the proposed method by combining DeepLabV3+, HarDNet-MSEG, and Pyramid Vision Transformers. The developed solution was then assessed through an extensive empirical evaluation in five different scenarios: polyp detection, skin detection, leukocyte recognition, environmental microorganism detection, and butterfly recognition. The model provides state-of-the-art results.
]]>Signals doi: 10.3390/signals3020021
Authors: Jaume Anguera Alejandro Fernández Carles Puente Aurora Andújar Jaap Groot
Antennas should be small enough to fit in the limited space of IoT devices and, at the same time, provide multi-band operation across several bands and remain stable when embedded in a device. In this regard, two different technologies are compared: an antenna booster and a flexible printed circuit antenna. The comparison is based on measured efficiency, and it shows that although the antenna booster is more than fifty times smaller in area, it provides better efficiency across the 698–960 MHz and 1710–2690 MHz frequency ranges on three different printed circuit boards (PCBs): a big PCB of 131 mm × 60 mm, a medium PCB of 95 mm × 42 mm, and a small PCB of 65 mm × 42 mm. Moreover, the performance of the flexible printed antenna depends on the mounting process, whereas that of the antenna booster does not.
]]>Signals doi: 10.3390/signals3020020
Authors: Acacio M. R. Amaral Antonio J. Marques Cardoso
This paper presents a Python-based simulation technique that can be used to predict the behavior of switch-mode non-isolated (SMNI) DC-DC converters operating in closed loop. The proposed technique can be implemented in open-source numerical computation software such as Scilab, Octave, or Python, which makes it versatile and portable. Python is used here because it is an open-source programming language, unlike MATLAB, which is one of the most-used programming and numeric computing platforms for simulating this type of system. The proposed technique requires the discretization of the equations that govern the open-loop operation of the converter, as well as the discretization of the transfer function of the controller. To simplify the implementation of the simulation technique, the code is subdivided into different modules, which together form a package. The converter under analysis is a buck converter operating in continuous conduction mode (CCM), but the proposed technique can be extended to any other SMNI DC-DC converter. The proposed technique is validated by comparison with results obtained in LTspice.
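A stripped-down, open-loop flavour of such a discretised simulation, forward-Euler integration of the averaged buck-converter state equations with illustrative component values and no controller, might look like this:

```python
def simulate_buck(vin=12.0, L=100e-6, C=100e-6, R=10.0,
                  duty=0.5, dt=1e-7, steps=200_000):
    # averaged CCM buck model:  L di/dt = duty*vin - v,  C dv/dt = i - v/R
    # (i: inductor current, v: output capacitor voltage), forward Euler
    i = v = 0.0
    for _ in range(steps):
        di = (duty * vin - v) / L
        dv = (i - v / R) / C
        i += di * dt
        v += dv * dt
    return v  # steady-state output approaches duty * vin
```

At equilibrium the update leaves v at duty * vin (6 V here); the full technique described in the abstract additionally discretises the controller transfer function to close the loop and switches the duty cycle accordingly.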
]]>Signals doi: 10.3390/signals3020019
Authors: Muhammad Saqib Abbas Anwar Saeed Anwar Lars Petersson Nabin Sharma Michael Blumenstein
Deep learning in the last decade has been very successful in computer vision and machine learning applications. Deep learning networks provide state-of-the-art performance in almost all of the applications where they have been employed. In this review, we aim to summarize the essential deep learning techniques and then apply them to COVID-19, a highly contagious viral infection that wreaks havoc on everyone’s lives in various ways. According to the World Health Organization and scientists, more testing potentially helps contain the virus’s spread. The use of chest radiographs is one of the early screening tests for determining disease, as the infection affects the lungs severely. To detect the COVID-19 infection, this experimental survey investigates and automates the process of testing by employing state-of-the-art deep learning classifiers. Moreover, the viruses are of many types, such as influenza, hepatitis, and COVID. Here, our focus is on COVID-19. Therefore, we employ binary classification, where one class is COVID-19 while the other viral infection types are treated as non-COVID-19 in the radiographs. The classification task is challenging due to the limited number of scans available for COVID-19 and the minute variations in the viral infections. We aim to employ current state-of-the-art CNN architectures, compare their results, and determine whether deep learning algorithms can handle the crisis appropriately and accurately. We train and evaluate 34 models. We also provide the limitations and future direction.
]]>Signals doi: 10.3390/signals3020018
Authors: Maximilian Achim Pfeffer Sai Ho Ling
Detecting pulmonary nodules early significantly contributes to the treatment success of lung cancer. Several deep learning models for medical image analysis have been developed to help classify pulmonary nodules. The design of convolutional neural network (CNN) architectures, however, is still heavily reliant on human domain knowledge. Manually designing CNN solutions has been shown to limit the data’s utility by creating a co-dependency on the creator’s cognitive bias, which urges the development of smart CNN architecture design solutions. In this paper, an evolutionary algorithm is used to optimise the classification of pulmonary nodules with CNNs. The implementation of a genetic algorithm (GA) for CNN architecture design and hyperparameter optimisation is proposed, which approximates optimal solutions by implementing a range of bio-inspired mechanisms of natural selection and Darwinism. For comparison purposes, two manually designed deep learning models, FractalNet and Deep Local-Global Network, were trained. The results show an outstanding classification accuracy of the fittest GA-CNN (91.3%), which outperformed both manually designed models. The findings indicate that GAs offer advantageous solutions for diagnostic challenges, the development of which may be fully automated in the future using GAs to design and optimise CNN architectures for various clinical applications.
]]>Signals doi: 10.3390/signals3020017
Authors: Yuchen Huang Wei Li Zhiyang Dou Wantong Zou Anye Zhang Zan Li
Millimeter-wave radar has demonstrated its high efficiency in complex environments in recent years, outperforming LiDAR and computer vision for human activity recognition in the presence of smoke, fog, and dust. In previous studies, researchers mostly analyzed either 2D/3D point clouds or range–Doppler information from the radar echo to extract activity features. In this paper, we propose a multi-model deep learning approach that fuses the features of both point clouds and range–Doppler maps for classifying six activities, i.e., boxing, jumping, squatting, walking, circling, and high-knee lifting, based on a millimeter-wave radar. We adopt a CNN–LSTM model to extract the time-serial features from point clouds and a CNN model to obtain the features from range–Doppler maps. We then fuse the two features and input the fused feature into a fully connected layer for classification. We built a dataset based on a 3D millimeter-wave radar from 17 volunteers. The evaluation on this dataset shows that the proposed method has higher accuracy than utilizing the two kinds of information separately and achieves a recognition accuracy of 97.26%, which is about 1% higher than other networks with only one kind of data as input.
]]>Signals doi: 10.3390/signals3020016
Authors: Francesca Gasparini Alessandra Grossi Marta Giltri Stefania Bandini
Physiological responses are currently widely used to recognize the affective state of subjects in real-life scenarios. However, these data are intrinsically subject-dependent, and inter-subject variability makes machine learning techniques for data classification difficult to apply. In this work, the reduction of inter-subject heterogeneity was considered in the case of photoplethysmography (PPG), which has been successfully used to detect stress and evaluate experienced cognitive load. To address this inter-subject heterogeneity, a novel personalized PPG normalization is herein proposed. A subject-normalized discrete domain where the PPG signals are properly re-scaled is introduced, based on the subject’s heartbeat frequency in resting-state conditions. The effectiveness of the proposed normalization was evaluated against other normalization procedures in a binary classification task distinguishing cognitive load from a relaxed state. The results obtained on two different datasets available in the literature confirm that applying the proposed normalization strategy improves classification performance.
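One way to picture the subject-normalized domain is as a time-axis re-scaling: the signal is resampled so that one resting-state heartbeat always spans the same number of samples, whatever the subject's resting heart rate. The exact re-scaling used in the paper may differ; this is a minimal sketch of the idea with synthetic signals, and the sampling rate and resting frequencies are assumed values.

```python
import numpy as np

def normalize_ppg(signal, fs, f0, samples_per_beat=100):
    """Resample `signal` (recorded at `fs` Hz) into a subject-normalized
    discrete domain where one resting heartbeat (frequency `f0` Hz)
    spans `samples_per_beat` samples."""
    n = len(signal)
    duration_beats = n / fs * f0                  # length in resting heartbeats
    m = int(round(duration_beats * samples_per_beat))
    old_t = np.arange(n) / fs
    new_t = np.linspace(0, old_t[-1], m)
    return np.interp(new_t, old_t, signal)

fs = 64.0                                 # Hz, assumed wearable PPG rate
f0_a, f0_b = 1.0, 1.5                     # resting HR of 60 and 90 bpm
t = np.arange(0, 10, 1 / fs)
ppg_a = np.sin(2 * np.pi * f0_a * t)      # toy PPG of subject A
ppg_b = np.sin(2 * np.pi * f0_b * t)      # toy PPG of subject B

norm_a = normalize_ppg(ppg_a, fs, f0_a)
norm_b = normalize_ppg(ppg_b, fs, f0_b)
# After normalization, both signals contain ~100 samples per resting beat,
# so beat-level features become comparable across subjects.
```

After this re-scaling, windows of equal length in the normalized domain cover the same number of resting heartbeats for every subject, which is what removes part of the inter-subject variability.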
]]>Signals doi: 10.3390/signals3020015
Authors: Evangelos D. Spyrou Ioannis Tsoulos Chrysostomos Stylios
Air pollution is a major problem in the everyday life of citizens, especially in the transport domain. Ships play a significant role in coastal air pollution, in conjunction with transport mobility in the broader area of ports. As such, ports should be monitored in order to assess air pollution levels and act accordingly. In this paper, we obtain CO values from environmental sensors installed in the broader area of the port of Igoumenitsa in Greece. We first analysed the CO values and identified some extreme values in the dataset that indicated a potential event. Thereafter, we separated the dataset into 6-h intervals and showed that certain hours exhibit extremely high rises. We transformed the dataset into a moving-average dataset, with the objective of reducing the extremely high values. We utilised a machine-learning algorithm, namely the univariate long short-term memory (LSTM) algorithm, to predict the time series collected from the port. We performed experiments using 100, 1000, and 7000 batches of data and report the model loss, the root-mean-square error, and the mean absolute error. We showed that with a batch number of 7000, the LSTM achieved a good prediction outcome. The proposed method was compared with the ARIMA model, and the comparison results confirm the merit of the approach.
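The moving-average transformation that damps the extreme CO values can be sketched in a few lines. The window size and the sample values below are illustrative; the paper's choices are not reproduced here.

```python
# Sketch of the moving-average transformation applied to the CO time
# series to damp extreme values before feeding it to the LSTM.
def moving_average(values, window=6):
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

co_ppm = [0.4, 0.5, 0.4, 9.8, 0.5, 0.4, 0.6, 0.5]   # one extreme spike
smoothed = moving_average(co_ppm, window=4)
# The spike's influence is spread across the window, so the maximum of
# the smoothed series is far below the raw maximum of 9.8.
```

Smoothing trades temporal resolution for stability: the LSTM no longer has to model rare spikes, at the cost of blurring exactly when the extreme event occurred.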
]]>Signals doi: 10.3390/signals3020014
Authors: Milton A. Garcés Daniel Bowman Cleat Zeiler Anthony Christe Tyler Yoshiyama Brian Williams Meritxell Colet Samuel Takazawa Sarah Popenhagen
A smartphone plummeted from a stratospheric height of 36 km, providing a near-real-time record of its rapid descent and ground impact. An app recorded and streamed useful internal multi-sensor data at high sample rates. Signal fusion with external and internal sensor systems permitted a more detailed reconstruction of the Skyfall chronology, including its descent speed, rotation rate, and impact deceleration. Our results reinforce the potential of smartphones as an agile and versatile geophysical data collection system for environmental and disaster monitoring IoT applications. We discuss mobile environmental sensing capabilities and present a flexible data model to record and stream signals of interest. The Skyfall case study can be used as a guide to smartphone signal processing methods that are transportable to other hardware platforms and operating systems.
]]>Signals doi: 10.3390/signals3020013
Authors: Ahmed Badr Abeer Badawi Abdulmonem Rashwan Khalid Elgazzar
This work presents XBeats, a novel platform for real-time electrocardiogram monitoring and analysis that uses edge computing and machine learning for early anomaly detection. The platform encompasses a data acquisition ECG patch with 12 leads to collect heart signals, perform on-chip processing, and transmit the data to healthcare providers in real-time for further analysis. The ECG patch provides a dynamically configurable selection of the active ECG leads that could be transmitted to the backend monitoring system. The selection ranges from a single ECG lead to a complete 12-lead ECG testing configuration. XBeats implements a lightweight binary classifier for early anomaly detection to reduce the time to action should abnormal heart conditions occur. This initial detection phase is performed on the edge (i.e., the device paired with the patch) and alerts can be configured to notify designated healthcare providers. Further deep analysis can be performed on the full fidelity 12-lead data sent to the backend. A fully functional prototype of XBeats has been implemented to demonstrate the feasibility and usability of the proposed system. Performance evaluation shows that XBeats can achieve up to 95.30% detection accuracy for abnormal conditions, while maintaining a high data acquisition rate of up to 441 samples per second. Moreover, the analytical results of the energy consumption profile show that the ECG patch provides up to 37 h of continuous 12-lead ECG streaming.
]]>Signals doi: 10.3390/signals3020012
Authors: Ioannis G. Tsoulos
A hybrid procedure that incorporates grammatical evolution and a weight-decaying technique is proposed here for various classification and regression problems. The proposed method has two main phases: the creation of features and the evaluation of these features. In the first phase, grammatical evolution creates new features as non-linear combinations of the original features of the datasets. In the second phase, the original dataset is transformed using the features created in the first phase, and a neural network trained with a genetic algorithm is applied to the modified dataset. The proposed method was applied to a wide range of datasets from the relevant literature, and the experimental results were compared with four other techniques.
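The core mechanism of grammatical evolution, mapping a genome of integers through a grammar into a non-linear feature expression, can be sketched in miniature. The grammar, the depth limit, and the genome below are illustrative only, not the paper's actual setup.

```python
from itertools import cycle

# Toy BNF-style grammar: an <expr> is a binary combination, a unary
# function application, or a terminal variable.
GRAMMAR = {
    "expr": [("expr", "op", "expr"), ("func", "expr"), ("var",)],
    "op": ["+", "*"],
    "func": ["sin", "abs"],
    "var": ["x0", "x1", "x2"],
}

def decode(genome, max_depth=3):
    """Map a genome (list of ints) to a feature-expression string."""
    codons = cycle(genome)                 # standard GE genome "wrapping"
    def expand(symbol, depth):
        choices = GRAMMAR[symbol]
        if symbol == "expr" and depth >= max_depth:
            choice = ("var",)              # force a terminal to stay finite
        else:
            choice = choices[next(codons) % len(choices)]
        if symbol != "expr":
            return choice                  # op/func/var choices are strings
        if choice == ("var",):
            return expand("var", depth + 1)
        if choice == ("func", "expr"):
            return f"{expand('func', depth + 1)}({expand('expr', depth + 1)})"
        return (f"({expand('expr', depth + 1)} "
                f"{expand('op', depth + 1)} "
                f"{expand('expr', depth + 1)})")
    return expand("expr", 0)

feature = decode([7, 3, 12, 1, 6, 9])
# e.g. a non-linear feature such as "abs((sin(x0) * (x0 * x0)))"
```

In the full method, a GA evolves such genomes so that the decoded features make the downstream classification or regression task easier, and the transformed dataset is then handed to the neural network of the second phase.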
]]>Signals doi: 10.3390/signals3020011
Authors: Nursyuhada Binti Haji Kadir Joseph K. Muguro Kojiro Matsushita Senanayake Mudiyanselaga Namal Arosha Senanayake Minoru Sasaki
Due to impaired mobility caused by aging, early detection and monitoring of gait parameters are very important to avoid otherwise substantial medical costs at a later age. For gait training and potential tele-monitoring applications outside clinical settings, low-cost yet highly reliable gait analysis systems are needed. This research proposes using a single LiDAR system to perform automatic gait analysis with polynomial fitting. The experimental setup consists of two walking speeds, fast walk and normal walk, along a 5-m straight line. Ten test subjects (mean age 28, SD 5.2) voluntarily participated in the study. We performed polynomial fitting to estimate the step length from the heel-projection point cloud data as the subject walks forwards and compared the values with the visual inspection method. The results showed that the visual inspection method is accurate up to 6 cm, while the polynomial method achieves 8 cm in the worst case (fast walking). With the accuracy difference estimated to be at most 2 cm, the polynomial method provides reliable heel location estimation compared with observational gait analysis. The proposed method improves accuracy by 4% over a previously proposed dual-laser range sensor method, which reported 57.87 cm ± 10.48, a 10% error; our method reported ±0.0633 m, a 6% error, for normal walking.
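The polynomial-fitting idea can be illustrated on synthetic data: fit a low-order polynomial to the heel-projection points of one stride and take the vertex of the fitted curve as the heel location. The data, the quadratic order, and the noise level below are assumptions for illustration, not the paper's actual point clouds.

```python
import numpy as np

# Synthetic heel trajectory: height above ground is roughly parabolic
# around the heel-strike position along the walkway.
rng = np.random.default_rng(1)
true_heel_x = 1.20                                  # metres along the walkway
x = np.linspace(true_heel_x - 0.15, true_heel_x + 0.15, 40)
height = 0.8 * (x - true_heel_x) ** 2 + 0.02        # parabolic trajectory
height += rng.normal(0, 0.005, x.size)              # LiDAR sensor noise

coeffs = np.polyfit(x, height, deg=2)               # fit a quadratic
a, b, _ = coeffs
estimated_heel_x = -b / (2 * a)                     # vertex of the parabola
error_cm = abs(estimated_heel_x - true_heel_x) * 100
```

Step length then follows as the distance between consecutive estimated heel-strike locations; fitting the whole point cloud makes the estimate robust to noise on any individual laser return.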
]]>