Signals, Volume 2, Issue 4 (December 2021) – 14 articles

Cover Story: Semantic segmentation is a very popular topic in modern computer vision, and it has applications in many fields. Researchers have proposed a variety of architectures for semantic image segmentation. The most common ones exploit an encoder–decoder structure that aims to capture the semantics of the image and its low-level features. The encoder uses convolutional layers, in general with a stride larger than one, to extract the features, while the decoder recreates the image by upsampling and using skip connections with the first layers.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and open them with the free Adobe Reader.
16 pages, 751 KiB  
Article
Sensor-Based Prediction of Mental Effort during Learning from Physiological Data: A Longitudinal Case Study
by Ankita Agarwal, Josephine Graft, Noah Schroeder and William Romine
Signals 2021, 2(4), 886-901; https://doi.org/10.3390/signals2040051 - 3 Dec 2021
Cited by 2 | Viewed by 2134
Abstract
Trackers for activity and physical fitness have become ubiquitous. Although recent work has demonstrated significant relationships between mental effort and physiological data such as skin temperature, heart rate, and electrodermal activity, we have yet to demonstrate their efficacy for the forecasting of mental effort such that a useful mental effort tracker can be developed. Given prior difficulty in extracting relationships between mental effort and physiological responses that are repeatable across individuals, we make the case that fusing self-report measures with physiological data within an internet or smartphone application may provide an effective method for training a useful mental effort tracking system. In this case study, we utilized over 90 h of data from a single participant over the course of a college semester. By fusing the participant’s self-reported mental effort in different activities over the course of the semester with concurrent physiological data collected with the Empatica E4 wearable sensor, we explored questions around how much data were needed to train such a device, and which types of machine-learning algorithms worked best. We concluded that although baseline models such as logistic regression and Markov models provided useful explanatory information on how the student’s physiology changed with mental effort, deep-learning algorithms were able to generate accurate predictions using the first 28 h of data for training. A system that combines long short-term memory and convolutional neural networks is recommended in order to generate smooth predictions while also being able to capture transitions in mental effort when they occur in the individual using the device.
(This article belongs to the Special Issue Sensor Fusion and Statistical Signal Processing)

23 pages, 8186 KiB  
Article
Effect of Exposure Time on Thermal Behaviour: A Psychophysiological Approach
by Bilge Kobas, Sebastian Clark Koth, Kizito Nkurikiyeyezu, Giorgos Giannakakis and Thomas Auer
Signals 2021, 2(4), 863-885; https://doi.org/10.3390/signals2040050 - 2 Dec 2021
Cited by 8 | Viewed by 3276
Abstract
This paper presents the findings of a 6-week long, five-participant experiment in a controlled climate chamber. The experiment was designed to understand the effect of time on thermal behaviour; the electrodermal activity (EDA) and the adaptive behaviour of occupants in response to a thermally non-uniform indoor environment were continuously logged. The results of the 150 h-long longitudinal study suggested a significant difference in tonic EDA levels between “morning” and “afternoon” clusters although the environmental parameters were the same, suggesting a change in the human body’s thermal reception over time. The correlation of EDA and temperature was greater for the afternoon cluster (r = 0.449, p < 0.001) than for the morning cluster (r = 0.332, p < 0.001). These findings showed a strong temporal dependency of the skin conductance level of the EDA on the operative temperature, following the person’s circadian rhythm. Even further, based on the person’s chronotype, the beginning of the “afternoon” cluster was observed to have shifted according to the person’s circadian rhythm. Furthermore, the study is able to show how the body reacts differently under the same PMV values, both within and between subjects, pointing to the lack of a temporal parameter in the PMV model.
(This article belongs to the Special Issue Biosignals Processing and Analysis in Biomedicine)
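The r values reported above are ordinary Pearson correlation coefficients between skin conductance level and operative temperature. A minimal pure-Python sketch of that computation (the sample data below are invented for illustration, not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: tonic EDA rising with operative temperature (°C).
temperature = [21.0, 22.0, 23.0, 24.0, 25.0]
eda = [0.31, 0.35, 0.33, 0.40, 0.43]
print(round(pearson_r(temperature, eda), 3))  # 0.921
```

A per-cluster analysis like the one in the paper would simply apply this to the "morning" and "afternoon" subsets separately and compare the two coefficients.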

11 pages, 2118 KiB  
Article
A Neural Network Model for Estimating the Heart Rate Response to Constant Intensity Exercises
by Maria S. Zakynthinaki, Theodoros N. Kapetanakis, Anna Lampou, Melina P. Ioannidou and Ioannis O. Vardiambasis
Signals 2021, 2(4), 852-862; https://doi.org/10.3390/signals2040049 - 2 Dec 2021
Cited by 2 | Viewed by 2175
Abstract
Estimating the heart rate (HR) response to exercises of a given intensity without the need of direct measurement is an open problem of great interest. We propose here a model that can estimate the heart rate response to exercise of constant intensity and its subsequent recovery, based on soft computing techniques. Multilayer perceptron artificial neural networks (NN) are implemented and trained using raw HR time series data. Our model’s input and output are the beat-to-beat time intervals and the HR values, respectively. The numerical results are very encouraging, as they indicate a mean relative square error of the estimated HR values of the order of 10⁻⁴ and an absolute error as low as 1.19 beats per minute, on average. Our model has also been proven to be superior when compared with existing mathematical models that predict HR values by numerical simulation. Our study concludes that our NN model can efficiently predict the HR response to any constant exercise intensity, a fact that can have many important applications, not only in the area of medicine and cardiovascular health, but also in the areas of rehabilitation, general fitness, and competitive sport.
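The figures quoted above are standard error metrics. A hedged sketch of how they would be computed against measured heart rates (the HR values and the exact definition of "mean relative square error" are assumptions for illustration, not the authors' data or network):

```python
def mean_relative_square_error(measured, estimated):
    """Mean of ((est - meas) / meas)^2 over all samples."""
    return sum(((e - m) / m) ** 2 for m, e in zip(measured, estimated)) / len(measured)

def mean_absolute_error(measured, estimated):
    """Mean of |est - meas|, here in beats per minute."""
    return sum(abs(e - m) for m, e in zip(measured, estimated)) / len(measured)

# Invented HR samples (bpm) during a constant-intensity bout.
measured = [92.0, 110.0, 126.0, 131.0, 128.0]
estimated = [93.1, 108.8, 127.0, 130.2, 129.4]
print(mean_absolute_error(measured, estimated))  # ≈ 1.1 bpm
```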

18 pages, 5744 KiB  
Article
Development of Surface EMG Game Control Interface for Persons with Upper Limb Functional Impairments
by Joseph K. Muguro, Pringgo Widyo Laksono, Wahyu Rahmaniar, Waweru Njeri, Yuta Sasatake, Muhammad Syaiful Amri bin Suhaimi, Kojiro Matsushita, Minoru Sasaki, Maciej Sulowicz and Wahyu Caesarendra
Signals 2021, 2(4), 834-851; https://doi.org/10.3390/signals2040048 - 12 Nov 2021
Cited by 5 | Viewed by 3205
Abstract
In recent years, surface electromyography (sEMG) signals have been effectively applied in various fields such as control interfaces, prosthetics, and rehabilitation. We propose a neck rotation estimation from sEMG and apply the signal estimate as a game control interface that can be used by people with disabilities or patients with functional impairment of the upper limb. This paper utilizes an equation estimation and a machine learning model to translate the signals into corresponding neck rotations. For testing, we designed two custom-made game scenes, a dynamic 1D object interception and a 2D maze scenery, in Unity 3D to be controlled by the sEMG signal in real time. Twenty-two (22) test subjects (mean age 27.95, std 13.24) participated in the experiment to verify the usability of the interface. In object interception, subjects demonstrated stable control, intercepting objects with more than 73% accuracy. In the 2D maze, male and female subjects reported completion times of 98.84 ± 50.2 s and 112.75 ± 44.2 s, respectively, with no significant difference in means by one-way ANOVA (p = 0.519). The results confirmed the usefulness of neck sEMG of the sternocleidomastoid (SCM) as a control interface with little or no calibration required. Control models using equations indicate intuitive direction and speed control, while machine learning schemes offer a more stable directional control. Control interfaces can be applied in several areas that involve neck activities, e.g., robot control and rehabilitation, as well as game interfaces, to enable entertainment for people with disabilities.
(This article belongs to the Special Issue Machine Learning and Signal Processing)

14 pages, 1602 KiB  
Article
Deep Ensembles Based on Stochastic Activations for Semantic Segmentation
by Alessandra Lumini, Loris Nanni and Gianluca Maguolo
Signals 2021, 2(4), 820-833; https://doi.org/10.3390/signals2040047 - 11 Nov 2021
Cited by 3 | Viewed by 2144
Abstract
Semantic segmentation is a very popular topic in modern computer vision, and it has applications in many fields. Researchers have proposed a variety of architectures for semantic image segmentation. The most common ones exploit an encoder–decoder structure that aims to capture the semantics of the image and its low-level features. The encoder uses convolutional layers, in general with a stride larger than one, to extract the features, while the decoder recreates the image by upsampling and using skip connections with the first layers. The objective of this study is to propose a method for creating an ensemble of CNNs by enhancing diversity among networks with different activation functions. In this work, we use DeepLabV3+ as an architecture to test the effectiveness of creating an ensemble of networks by randomly changing the activation functions inside the network multiple times. We also use different backbone networks in our DeepLabV3+ to validate our findings. A comprehensive evaluation of the proposed approach is conducted across two different image segmentation problems: the first is from the medical field, i.e., polyp segmentation for early detection of colorectal cancer, and the second is skin detection for several different applications, including face detection, hand gesture recognition, and many others. On the first problem, we manage to reach a Dice coefficient of 0.888 and a mean intersection over union (mIoU) of 0.825 on the competitive Kvasir-SEG dataset. The high performance of the proposed ensemble is confirmed in skin detection, where the proposed approach ranks first with respect to other state-of-the-art approaches (including HarDNet) in a large set of testing datasets.
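The Dice coefficient and mIoU quoted above are standard overlap metrics between a predicted and a ground-truth segmentation mask. A minimal sketch on flattened binary masks (toy data, not the Kvasir-SEG results):

```python
def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) over binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    """Intersection over union: |A∩B| / |A∪B| over binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

# Toy 8-pixel masks, flattened; 1 = foreground (e.g., polyp), 0 = background.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice(pred, truth), iou(pred, truth))  # 0.75 0.6
```

The mIoU reported in the paper is this IoU averaged over classes; for an ensemble, the prediction would come from averaging or voting over the member networks' outputs before thresholding.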

17 pages, 533 KiB  
Review
CS Measures for Nuclear Power Plant Protection: A Systematic Literature Review
by Nabin Chowdhury
Signals 2021, 2(4), 803-819; https://doi.org/10.3390/signals2040046 - 4 Nov 2021
Cited by 4 | Viewed by 2777
Abstract
As digital instrumentation in Nuclear Power Plants (NPPs) is becoming increasingly complex, both attack vectors and defensive strategies are evolving based on new technologies and vulnerabilities. Continued efforts have been made to develop a variety of measures for the cyber defense of these infrastructures, which often consist of adapting security measures previously developed for other critical infrastructure sectors to the requirements of NPPs. That being said, due to the very recent development of these solutions, there is a lack of agreement or standardization when it comes to their adoption at an industrial level. To better understand the state of the art in NPP Cyber-Security (CS) measures, in this work, we conduct a Systematic Literature Review (SLR) to identify scientific papers discussing CS frameworks, standards, guidelines, best practices, and any additional CS protection measures for NPPs. From our literature analysis, it was evidenced that protecting the digital space in NPPs involves three main steps: (i) identification of critical digital assets; (ii) risk assessment and threat analysis; (iii) establishment of measures for NPP protection based on the defense-in-depth model. To ensure the CS protection of these infrastructures, a holistic defense-in-depth approach is suggested in order to avoid excessive granularity and lack of compatibility between different layers of protection. Additional research is needed to ensure that such a model is developed effectively and that it is based on the interdependencies of all security requirements of NPPs.
(This article belongs to the Special Issue Critical Infrastructures Cybersecurity and Resilience)

32 pages, 5136 KiB  
Review
Towards Integration of Security and Safety Measures for Critical Infrastructures Based on Bayesian Networks and Graph Theory: A Systematic Literature Review
by Sandeep Pirbhulal, Vasileios Gkioulos and Sokratis Katsikas
Signals 2021, 2(4), 771-802; https://doi.org/10.3390/signals2040045 - 2 Nov 2021
Cited by 9 | Viewed by 3328
Abstract
In recent times, security and safety assessments are, at the least, conducted separately in safety-sensitive or critical sectors. Nevertheless, the two processes do not commonly analyze the impact of security risks on safety. Several scholars have focused on integrating safety and security risk assessments, using different methodologies and tools, in critical infrastructures (CIs). Bayesian networks (BN) and graph theory (GT) have received much attention from academia and industry for incorporating security and safety features in different CI applications. Hence, this study aims to conduct a systematic literature review (SLR) of the co-engineering of safety and security using BN or GT. In this SLR, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations are followed. Initially, 2295 records (acquired between 2011 and 2020) were identified for screening purposes. Later on, 240 articles were processed to check eligibility criteria. Overall, this study includes 64 papers, after examining the pre-defined criteria and guidelines. Further, the included studies were compared regarding the number of nodes required for system development, applied data sources, research outcomes, threat actors, performance verification mechanisms, implementation scenarios, applicability and functionality, application sectors, and the advantages and disadvantages of combining safety and security measures based on GT and BN. The findings of this SLR suggest that BN and GT are used widely for risk and failure management in several domains. The most studied sectors include the maritime industry (14%), vehicle transportation (13%), railway (13%), nuclear (6%), chemical industry (6%), gas and pipelines (5%), smart grid (5%), network security (5%), air transportation (3%), the public sector (3%), and cyber-physical systems (3%). It is also observed that 80% of the included studies use BN models to incorporate safety and security concerns, whereas 15% use GT approaches and 5% use joint GT and BN methodologies. Additionally, 31% of the identified studies verified their developed approaches in real-time implementations, whereas simulation or preliminary analysis was presented for the remaining methods. Finally, the main research limitations, concluding remarks, and future research directions are presented.
(This article belongs to the Special Issue Critical Infrastructures Cybersecurity and Resilience)

17 pages, 2555 KiB  
Article
Language and Reasoning by Entropy Fractals
by Daniela López De Luise
Signals 2021, 2(4), 754-770; https://doi.org/10.3390/signals2040044 - 2 Nov 2021
Cited by 1 | Viewed by 2297
Abstract
Like many other brain productions, language is a complex tool that helps individuals communicate with each other. Many studies in computational linguistics aim to exhibit and understand its structures and content production. At present, a large list of contributions can describe and manage it with different levels of precision and applicability, but a gap remains for generative purposes. This paper is focused on laying the groundwork for understanding language production from a combination of entropy and fractals. It is part of a larger work on seven rules that are intended to help build sentences automatically, in the context of dialogs with humans. Within the scope of this paper, a set of dialogs is outlined and pre-processed, three of the thermodynamic rules of language production are introduced and applied, and the communication implications and statistical evaluation are presented. From the results, a final analysis suggests that exploring fractal explanations of entropy could provide prospective insight for automatic sentence generation in natural language.
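The entropy side of such rules rests on Shannon entropy over empirical symbol frequencies. A small illustration of the quantity involved (word-level, on an invented sentence; the paper's rules themselves are not reproduced here):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits, of the empirical symbol distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented dialog fragment, tokenized at the word level.
words = "the cat saw the dog and the dog saw the cat".split()
print(round(shannon_entropy(words), 3))  # ≈ 2.187 bits per word
```

Character-level or n-gram variants follow by changing what counts as a symbol, which is typically where fractal (self-similar, scale-dependent) structure is probed.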

25 pages, 2217 KiB  
Article
IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies
by Frédéric Schweitzer and Alexandre Campeau-Lecours
Signals 2021, 2(4), 729-753; https://doi.org/10.3390/signals2040043 - 21 Oct 2021
Cited by 1 | Viewed by 2517
Abstract
Assistive technologies (ATs) often have a high dimensionality of possible movements (e.g., an assistive robot with several degrees of freedom or a computer), but the users have to control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching allows the user to input several different commands with low-dimensionality sensors by recognizing not only their output, but also their sequential pattern through time, similarly to Morse code. In this paper, the algorithm is applied to the recognition of hand gestures, inputted using an inertial measurement unit worn by the user. An SVM-based algorithm, designed to be robust with small training sets (e.g., five examples per class), is developed to recognize gestures in real time. Finally, the interface is applied to control a computer’s mouse and keyboard. The interface was compared against (and combined with) the head-movement-based AssystMouse software. The hand gesture interface showed encouraging results for this application, but could also be used with other body parts (e.g., head and feet) and could control various ATs (e.g., an assistive robotic arm or a prosthesis).
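The Morse-like idea is that a low-dimensional sensor becomes expressive once the temporal pattern of its outputs is matched against per-command templates. A toy sketch of such matching (the gesture alphabet and commands are invented; the paper uses an SVM over IMU features, not edit distance):

```python
def closest_command(observed, templates):
    """Match an observed symbol sequence to the command whose template
    has the smallest edit (Levenshtein) distance."""
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]
    return min(templates, key=lambda cmd: edit_distance(observed, templates[cmd]))

# Invented gesture alphabet: U = up-flick, D = down-flick, H = hold.
templates = {"click": "UH", "scroll": "UDU", "escape": "DD"}
print(closest_command("UDU", templates))  # scroll
```

Tolerance to a mis-detected symbol comes for free: a sequence one edit away from a template still resolves to that command unless another template is closer.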

23 pages, 987 KiB  
Article
On the Quality of Deep Representations for Kepler Light Curves Using Variational Auto-Encoders
by Francisco Mena, Patricio Olivares, Margarita Bugueño, Gabriel Molina and Mauricio Araya
Signals 2021, 2(4), 706-728; https://doi.org/10.3390/signals2040042 - 14 Oct 2021
Cited by 2 | Viewed by 3191
Abstract
Light curve analysis usually involves extracting manually designed features associated with physical parameters and visual inspection. The large amount of data collected nowadays in astronomy by different surveys represents a major challenge for characterizing these signals. Therefore, finding a good informative representation for them is a key non-trivial task. Some studies have tried unsupervised machine learning approaches to generate this representation without much effectiveness. In this article, we show that variational auto-encoders can learn these representations by taking the difference between successive timestamps as an additional input. We present two versions of such auto-encoders: Variational Recurrent Auto-Encoder plus time (VRAEt) and re-Scaling Variational Recurrent Auto-Encoder plus time (S-VRAEt). The objective is to achieve the most likely low-dimensional representation of the time series, matched to the latent variables, which should compactly contain the pattern information needed to reconstruct it. In addition, the S-VRAEt embeds the re-scaling preprocessing of the time series into the model in order to use the flux standard deviation in learning the structure of the light curves. To assess our approach, we used the largest transit light curve dataset, obtained during the 4 years of the Kepler mission, and compared it to similar techniques in signal processing and light curves. The results show that the proposed methods obtain improvements in terms of the quality of the deep representation of phase-folded transit light curves with respect to their deterministic counterparts. Specifically, they present a good balance between the reconstruction task and the smoothness of the curve, validated with the root mean squared error, mean absolute error, and auto-correlation metrics. Furthermore, there was good disentanglement in the representation, as validated by the Pearson correlation and mutual information metrics. Finally, a useful representation for distinguishing categories was validated with the F1 score in the task of classifying exoplanets. Moreover, the S-VRAEt model retains all the advantages of VRAEt, achieving a classification performance quite close to its maximum model capacity and generating light curves that are visually comparable to a Mandel–Agol fit. Thus, the proposed methods present a new way of analyzing and characterizing light curves.
(This article belongs to the Special Issue Machine Learning and Signal Processing)

18 pages, 761 KiB  
Article
A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors
by Rajesh Thomas, Victor DeBrunner and Linda S. DeBrunner
Signals 2021, 2(4), 688-705; https://doi.org/10.3390/signals2040041 - 11 Oct 2021
Cited by 3 | Viewed by 3074
Abstract
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N = 2^k-length DFT (where k is a positive integer), and is substantially more efficient for any other length N. Our method uses the eigenstructure of the DFT, and the fact that sparse, real-valued eigenvectors can be found and used to advantage. Computation using our method uses only vector dot products and vector–scalar products.
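The three-multiplication complex product mentioned above is the classical Karatsuba-style identity. A quick sketch verifying it against the direct four-multiplication form:

```python
def complex_mult_3mul(a, b, c, d):
    """(a + bi)(c + di) using 3 real multiplications and 5 real additions.
    Direct form needs 4 multiplications: (ac - bd) + (ad + bc)i."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    # k1 - k3 = ac - bd (real part); k1 + k2 = ad + bc (imaginary part)
    return k1 - k3, k1 + k2

# (2 + 3i)(4 + 5i) = -7 + 22i
print(complex_mult_3mul(2.0, 3.0, 4.0, 5.0))  # (-7.0, 22.0)
```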

26 pages, 7713 KiB  
Article
Bearing Prognostics: An Instance-Based Learning Approach with Feature Engineering, Data Augmentation, and Similarity Evaluation
by Jun Sun and Qiao Sun
Signals 2021, 2(4), 662-687; https://doi.org/10.3390/signals2040040 - 10 Oct 2021
Viewed by 2426
Abstract
We propose an instance-based learning approach with data augmentation and similarity evaluation to estimate the remaining useful life (RUL) of a mechanical component for health management. The publicly available PRONOSTIA datasets, which provide accelerated degradation test data for bearings, are used in our study. The challenges with the datasets include a very limited number of run-to-failure examples, no failure mode information, and a wide range of bearing life spans. Without a large number of training samples, feature engineering is necessary. Principal component analysis is applied to the spectrogram of vibration signals to obtain prognostic feature sequences. A data augmentation strategy is developed to generate synthetic prognostic feature sequences using learning instances. Subsequently, similarities between the test and learning instances can be assessed using a root mean squared (RMS) difference measure. Finally, an ensemble method is developed to aggregate the RUL estimates based on multiple similar prognostic feature sequences. The proposed approach demonstrates comparable performance with published solutions in the literature. It serves as an alternative method for solving the RUL estimation problem.
(This article belongs to the Special Issue Advanced Signal/Data Processing for Structural Health Monitoring)
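The similarity step above reduces to an RMS difference between the test feature sequence and each learning instance, with the RUL estimate aggregated over the most similar ones. A toy sketch of that scheme (the feature sequences, lifetimes, and simple k-nearest averaging are invented for illustration, not the paper's exact ensemble):

```python
import math

def rms_difference(a, b):
    """Root-mean-squared difference between two aligned feature sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def estimate_rul(test_seq, instances, k=2):
    """Average the RUL of the k learning instances closest in RMS difference."""
    ranked = sorted(instances, key=lambda inst: rms_difference(test_seq, inst[0]))
    return sum(rul for _, rul in ranked[:k]) / k

# Invented degradation feature sequences with known remaining life (hours).
instances = [
    ([0.1, 0.2, 0.4], 120.0),
    ([0.1, 0.3, 0.6], 90.0),
    ([0.5, 0.7, 0.9], 20.0),
]
print(estimate_rul([0.1, 0.25, 0.5], instances))  # 105.0, mean of the 2 closest
```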

25 pages, 3811 KiB  
Article
Blind Source Separation in Polyphonic Music Recordings Using Deep Neural Networks Trained via Policy Gradients
by Sören Schulze, Johannes Leuschner and Emily J. King
Signals 2021, 2(4), 637-661; https://doi.org/10.3390/signals2040039 - 7 Oct 2021
Cited by 3 | Viewed by 2375
Abstract
We propose a method for the blind separation of sounds of musical instruments in audio signals. We describe the individual tones via a parametric model, training a dictionary to capture the relative amplitudes of the harmonics. The model parameters are predicted via a U-Net, which is a type of deep neural network. The network is trained without ground truth information, based on the difference between the model prediction and the individual time frames of the short-time Fourier transform. Since some of the model parameters do not yield a useful backpropagation gradient, we model them stochastically and employ the policy gradient instead. To provide phase information and account for inaccuracies in the dictionary-based representation, we also let the network output a direct prediction, which we then use to resynthesize the audio signals for the individual instruments. Due to the flexibility of the neural network, inharmonicity can be incorporated seamlessly and no preprocessing of the input spectra is required. Our algorithm yields high-quality separation results with particularly low interference on a variety of different audio samples, both acoustic and synthetic, provided that the sample contains enough data for the training and that the spectral characteristics of the musical instruments are sufficiently stable to be approximated by the dictionary.
(This article belongs to the Special Issue Advances in Processing and Understanding of Music Signals)
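The parametric tone model above describes each note as a fundamental plus harmonics whose relative amplitudes come from a learned dictionary. A toy rendering of that representation (hand-picked amplitudes, not a trained dictionary, and plain sinusoids rather than the paper's full model):

```python
import math

def render_tone(f0, harmonic_amps, sample_rate=16000, duration=0.01):
    """Synthesize a tone as a sum of sinusoidal harmonics of f0,
    weighted by the dictionary's relative amplitudes."""
    n = int(sample_rate * duration)
    return [
        sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t / sample_rate)
            for k, a in enumerate(harmonic_amps))
        for t in range(n)
    ]

# Hand-picked relative amplitudes for harmonics 1..4 of a 440 Hz tone.
samples = render_tone(440.0, [1.0, 0.5, 0.25, 0.125])
print(len(samples))  # 160 samples = 10 ms at 16 kHz
```

Inharmonicity, as mentioned in the abstract, would amount to letting the partial frequencies deviate slightly from exact integer multiples of f0.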

18 pages, 2259 KiB  
Article
RedHerd: Offensive Cyberspace Operations as a Service
by Giovanni Pecoraro, Mario D’Amico and Simon Pietro Romano
Signals 2021, 2(4), 619-636; https://doi.org/10.3390/signals2040038 - 1 Oct 2021
Viewed by 3636
Abstract
Nowadays, time, scope and cost constraints, along with knowledge requirements and personnel training, constitute blocking restrictions for effective Offensive Cyberspace Operations (OCO). This paper presents RedHerd, an open-source, collaborative and serverless orchestration framework that overcomes these limitations. RedHerd leverages the ‘as a Service’ paradigm in order to seamlessly deploy a ready-to-use infrastructure that can also be adopted for effective simulation and training purposes, by reliably reproducing a real-world cyberspace battlefield in which red and blue teams can challenge each other. We discuss both the design and implementation of the proposed solution, focusing on its main functionality and highlighting how it fits the Open Systems Architecture design pattern, thanks to the adoption of both open standards and widespread open-source software components. The paper also presents a complete OCO simulation based on the usage of RedHerd to perform a fictitious attack and fully compromise an imaginary enterprise following the Cyber Kill Chain (CKC) phases.
