Article

Electromyography-Based Biomechanical Cybernetic Control of a Robotic Fish Avatar

by Manuel A. Montoya Martínez 1,†, Rafael Torres-Córdoba 1, Evgeni Magid 2,3 and Edgar A. Martínez-García 1,*,†

1 Laboratorio de Robótica, Institute of Engineering and Technology, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32310, Mexico
2 Institute of Information Technology and Intelligent Systems, Kazan Federal University, Kazan 420008, Russia
3 HSE Tikhonov Moscow Institute of Electronics and Mathematics, HSE University, Moscow 101000, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Machines 2024, 12(2), 124; https://doi.org/10.3390/machines12020124
Submission received: 3 January 2024 / Revised: 5 February 2024 / Accepted: 6 February 2024 / Published: 9 February 2024
(This article belongs to the Special Issue Biorobotic Locomotion and Cybernetic Control)

Abstract:
This study introduces a cybernetic control and architectural framework for a robotic fish avatar operated by a human. The behavior of the robot fish is influenced by the electromyographic (EMG) signals of the human operator, triggered by stimuli from the surrounding objects and scenery. A deep artificial neural network (ANN) with perceptrons classifies the EMG signals, discerning the type of muscular stimuli generated. The research unveils a fuzzy-based oscillation pattern generator (OPG) designed to emulate functions akin to a neural central pattern generator, producing coordinated fish undulations. The OPG generates swimming behavior as an oscillation function, decoupled into coordinated step signals, right and left, for a dual electromagnetic oscillator in the fish propulsion system. Furthermore, the research presents an underactuated biorobotic mechanism of the subcarangiform type comprising a two-solenoid electromagnetic oscillator, an antagonistic musculoskeletal elastic system of tendons, and a multi-link caudal spine composed of helical springs. The biomechanics dynamic model and control for swimming, as well as the ballasting system for submersion and buoyancy, are deduced. This study highlights the utilization of EMG measurements encompassing sampling time and microvolt (μV) signals for both hands and all fingers. The subsequent feature extraction resulted in three types of statistical patterns, namely, Ω, γ, λ, serving as inputs for a multilayer feedforward neural network of perceptrons. The experimental findings quantified controlled movements, specifically caudal fin undulations during forward, right, and left turns, with a particular emphasis on the dynamics of caudal fin undulations of a robot prototype.

1. Introduction

Avatar robotics involves remotely controlling a robot to interact with the physical environment on behalf of a human operator, enabling them to virtually embody the robot and perform actions as if physically present [1,2]. This transformative technology extends human presence to remote or hazardous locations, with applications spanning space exploration, disaster response, remote inspection, telemedicine, and diverse domains. Leveraging progress in robotics, teleoperation systems, sensory feedback interfaces, and communication networks, avatar robotics enhances human capabilities, ensures safer operations, and broadens human presence and expertise in various fields [3,4].
Furthermore, cybernetic control functions as a regulatory system utilizing feedback mechanisms to uphold stability and achieve desired outcomes [5]. The incorporation of feedback loops is central to cybernetic control systems, continuously monitoring a system’s behavior, comparing it to a reference state, and generating corrective actions to address any deviations. This iterative feedback process facilitates self-regulation and goal attainment within the system. The application domains of cybernetic control span engineering, biology, and psychology, with the goal of enabling robots to interact with humans in more intuitive ways [6]. This involves adapting their actions and responses based on human feedback and behavior, cultivating a more seamless and responsive human–robot interaction.
Cybernetic biorobotics, at its core, is an interdisciplinary frontier that harmonizes principles from cybernetics, biology, and robotics. Its primary mission is the exploration and development of robots or robotic systems intricately inspired by the marvels of biological organisms. This field is driven by the ambition to conceive robots capable of mimicking and integrating the sophisticated principles and behaviors observed in living entities. Researchers draw inspiration from the intricate control systems of biological organisms and this creative synthesis results in the creation of robots characterized by adaptive and intelligent behaviors, thus mirroring the intricacies found in the natural world. Bioinspired robotics, a central focus within this discipline, involves distilling the fundamental principles and behaviors intrinsic to biological entities and skillfully incorporating them into the design and control of robotic systems. It has the potential to advance the development of robots endowed with locomotion and manipulation capabilities akin to animals, as well as robots capable of adapting to dynamic environments or interacting with humans in more natural and intuitive ways [7]. Moreover, research in cybernetic biorobotics can offer valuable insights into comprehending biological systems, fostering advancements in disciplines like neuroscience and biomechanics. Furthermore, remote cybernetic robots may rely on haptic systems as essential interfaces. A haptic system, characterized by its ability to provide users with a sense of touch or tactile feedback through force, vibration, or other mechanical means, comprises a haptic interface and a haptic rendering system. Collaboratively, these components simulate touch sensations, enabling users to engage with virtual or remote environments in a tactile manner [8].
This research introduces a control and sensing architecture that integrates a cybernetic scheme based on the recognition of electromyographic control signals, governing a range of locomotive behaviors in a robotic fish. Conceptually, the human operator receives feedback signals from the sensors of the biorobotic avatar, conveying information about its remote environment. The proposed approach stands out due to its key features and contributions, which include:
  • The exposition of an innovative conceptual cybernetic fish avatar architecture.
  • The creation of an EMG data filtering algorithm, coupled with a method for extracting, classifying, and recognizing muscular patterns using a deep ANN, serves as a cybernetic interface for the governance of the fish avatar.
  • The development of a fuzzy-based oscillation pattern generator (OPG) designed to generate periodic oscillation patterns around the fish’s caudal fin. These coordinated oscillations are decoupled into right and left step functions, specifically crafted to input into a lateral pair of electromagnetic coils, thereby producing undulating swimming motions of the robot fish.
  • The conception of a bioinspired robotic fish mechanism is characterized by the incorporation of underactuated elements propelled by serial links featuring helical springs. This innovative design is empowered by a dual solenoid electromagnetic oscillator and a four-bar linkage, reflecting a novel approach to bioinspired robotics.
  • The derivation of closed-form control laws for both the undulation of the underactuated caudal multilink dynamics and the ballasting system.
Section 2 provides a comprehensive discussion of the comparative analysis of the current state of the art. Section 3 provides a detailed description of the proposed architecture of the cybernetic system model. Section 4 presents an approach for filtering electromyography (EMG) data and delves into an in-depth discussion of a classifier based on a deep ANN for the recognition of hand-motion EMG stimuli patterns. Section 5 presents the development of a fuzzy-based oscillation pattern generator. Section 6 details the robot’s mechanism parts and its dynamic model. Section 7 focuses on the development of a feedback control for the fish’s ballasting system. Finally, Section 8 provides the concluding remarks of the research study.

2. Analysis of the State of the Art

This section synthesizes the relevant literature and provides insights into the current state of the art. Further, it aims to examine and evaluate the existing research and advancements in the field. This brief analysis identifies and compares different aspects, providing a comprehensive overview of the relevant research and advancements concerning methodologies and outcomes.
Multiple basic concepts of cybernetics [9] at the intersection of physics, control theory, and molecular systems were presented in [10], where a speed-gradient approach to modeling the dynamics of physical systems is discussed. A novel research approach, namely Ethorobotics, proposes the use and development of advanced bioinspired robotic replicas as a method for investigating animal behavior [11]. In the domain of telepresence and teleoperation, diverse systems and methodologies have been devised to facilitate remote control of robots [12]. One such system is the multi-robot teleoperation system based on a brain–computer interface, as documented by [13]. This system aims to enable individuals with severe neuromuscular deficiencies to operate multiple robots solely through their brain activity, thus offering telepresence via a thought-based interaction mode. A comprehensive review addressing the existing teleoperation methods and techniques for enhancing the control of mobile robots has been presented by [14]. This review critically analyzes, categorizes, and summarizes the existing teleoperation methods for mobile robots while highlighting various enhancement techniques that have been employed. It makes clear the relative advantages and disadvantages associated with these methods and techniques. The field of telepresence and teleoperation robotics has witnessed substantial attention and interest over the past decade [15], finding extensive applications in healthcare, education, surveillance, disaster recovery, and corporate/government sectors. In the specific context of underwater robots, gesture recognition-based teleoperation systems have been developed to enable users to control the swimming behavior of these robots. Such systems foster direct interaction between onlookers and the robotic fish, thereby enhancing the intuitive experience of human–robot interaction. 
Furthermore, efforts have been made to enhance the consistency and quality of robotic fish tails through improved fabrication processes, and target tracking algorithms have been developed to enhance the tracking capabilities of these robots [16]. The study in [17] developed teleoperation for remote control of a robotic fish by hand-gesture recognition. It allowed direct interaction between onlookers and the biorobot. Another notable system is the assistive telepresence system employing augmented reality in conjunction with a physical robot, as detailed in the work by [18]. This system leverages an optimal non-iterative alignment solver to determine the optimally aligned pose of the 3D human model with the robot, resulting in faster computations compared to baseline solvers and delivering comparable or superior pose alignments. The review presented in [19] analyzes the progress of robot skin in multimodal sensing and machine perception for sensory feedback in feeling proximity, pressure, and temperature for collaborative robot applications considering immersive teleoperation and affective interaction. Reference [20] reported an advanced robotic avatar system designed for immersive teleoperation, having some key functions such as human-like manipulation and communication capabilities, immersive 3D visualization, and transparent force-feedback telemanipulation. Suitable human–robot collaboration in medical application has been reported [21], where force perception is augmented for the human operator during needle insertion in soft tissue. Telepresence of mobile robotic systems may incorporate remote video transmission to steer the robot by seeing through its eyes remotely. Reference [22] presented an overview including social application domains. Research has been conducted on the utilization of neural circuits to contribute to limb locomotion [23] in the presence of uncertainty. 
Optimizing the data showed the combination of circuits necessary for efficient locomotion. A review has also been conducted on central pattern generators (CPGs) employed for locomotion control in robots [24]. This review encompasses neurobiological observations, numerical models, and robotic applications of CPGs. Reference [25] describes an extended mathematical model of the CPG supported by two neurophysiological studies: identification of a two-layered CPG neural circuitry and a specific neural model for generating different patterns. The CPG model is used as the low-level controller of a robot to generate walking patterns, with the inclusion of an ANN as a layer of the CPG to produce rhythmic and non-rhythmic motion patterns. The work in [26] presented a review of bionic robotic fish, tackling major concepts in kinematics, control, learning, hydrodynamic forces, and critical concepts in locomotion coordination. The research presented in [27] reviews the human manual control of devices in cybernetics using mathematical models and advances of theory and applications, from linear time-invariant modeling of stationary conditions to methods and analyses of adaptive and time-varying cybernetics–human interactions in control tasks. New foundations for cybernetics are expected to emerge and impact numerous domains involving humans in manual control and neuromuscular system modeling.
Building upon the preceding analysis regarding the relevant literature, the subsequent table (Table 1) encapsulates the primary distinctions articulated in this study in relation to the most pertinent literature identified.
As delineated in Table 1, the present study introduces distinctive elements that set it apart from the recognized relevant literature. However, it is noteworthy to acknowledge that various multidisciplinary domains may exhibit commonalities. Across these diverse topics, shared elements encompass robotic avatars, teleoperation, telepresence, immersive human–robot interfaces, as well as haptic or cybernetic systems in different application domains. In this research, the fundamental principle of a robotic avatar entails controlling its swimming response to biological stimuli from the human operator. The human controller is able to gain insight into the surrounding world of the robotic fish avatar through a haptic interface. This interface allows the human operator to yield biological electromyography stimuli as the result of their visual and skin impressions (e.g., pressure, temperature, heading vibrations). The biorobotic fish generates its swimming locomotive behavior, which is governed by EMG stimuli yielded in real-time in the human. Through a neuro-fuzzy controller, the neuronal part (cybernetic observer) classifies the type of human EMG reaction, and the fuzzy part determines the swimming behavior.

3. Conceptual System Architecture

This section encompasses a comprehensive framework that highlights the integration of various components to propose a cohesive cybernetic robotic model. In addition, this section outlines the key concepts and elucidates their interactions within the system.
Figure 1 presents an overview of the key components constituting the proposed system architecture. This manuscript thoroughly explores the modeling of four integral elements: (i) the cybernetic human controller, employing ANN classification of EMG signals; (ii) a fuzzy-based locomotion pattern generator; (iii) an underactuated bioinspired robot fish; and (iv) the robot’s sensory system, contributing feedback for the haptic system. While we will discuss the relevance and impact of the latter item within the architecture, it is important to note that the detailed exploration of topics related to haptic development and wearable technological devices goes beyond the scope of this paper and will be addressed in future work. Nevertheless, we deduce the observable variables that serve as crucial inputs for the haptic system.
Essentially, there are six haptic feedback sensory inputs of interest for the human, representing the observable state of the avatar robot: Eulerian variables, including angular and linear displacements and their higher-order derivatives; biomechanical caudal motion; hydraulic pressure; scenario temperature; and passive vision. Figure 2 (left) provides an illustration of the geometric distribution of the sensing devices.
The instrumented robot is an embodiment of the human submerged in water, featuring an undulatory swimming mechanical body imbued with muscles possessing underactuated characteristics. These features empower the biorobotic avatar to execute movements and swim in its aquatic surroundings.
The observation models aim to provide insights into how these sensory perceptions are conveyed to the haptic helmet with haptic devices, including a reaction wheel mechanism. A comprehensive schema emerges wherein the haptic nexus, bolstered by pivotal human biosensorial components, including gravireceptors, Ruffini corpuscles, Pacinian receptors, and retinal photoreceptors, converges to interface with the sensory substrate of the human operator. Such a convergence engenders a cascading sequence wherein biological input stimuli coalesce to yield discernible encephalographic activity, the primary layer of subsequent electromyographic outputs. These consequential EMG outputs undergo processing in a swim oscillatory pattern generator, thereby embodying control of biomechanical cybernetic governance.
In accordance with Figure 1, it is noteworthy that the various haptic input variables, such as temperature, pressure, and the visual camera images, represent direct sensory measurements transmitted from the robotic fish to the components of the haptic interface. Conversely, the robotic avatar takes on the role of a thermosensory adept in order for the human to assess the ambient thermal landscape. Thus, from this discernment, a surrogate thermal approach is projected onto the tactile realm of the human operator through the modulation of thermally responsive plates enmeshed within the haptic interface. Therefore, a crafted replica of the temperature patterns detected by the robotic aquatic entity is seamlessly integrated into the human sensory experience. This intertwining of thermal emulation is reached by the network of Ruffini corpuscles, intricately nestled within the human skin, thereby enhancing the experiential authenticity of this multisensory convergence. As for interaction through the haptic functions, the Pacinian corpuscles function as discerning receptors, proficiently registering subtle haptic pressures. This pressure sensing finds its origin in the dynamic tactile signals inherent to the aquatic habitat, intricately associated with the underwater depth traversed by the robotic avatar. Integral to the comprehensive sensory scheme, the optical sensors housed within the robotic entity acquire visual data. These visual data are subsequently channeled to the human’s cognition through the haptic interface’s perceptual canvas. Within this, the human sensory apparatus assumes the role of an engaged receptor, duly transducing these visual envoys through the lattice of retinal photoreceptors.
Embedded within the robotic fish’s body, several inertial measurement units (IMU) play a pivotal role in quantifying Eulerian inclinations intrinsic to the aquatic environment. These intricate angular displacements are subsequently channeled to the human operator, thereby initiating an engagement of the reaction wheel mechanism. As a consequential outcome of this interplay, a synchronized emulation of tilting motion is induced, mirroring the nuanced cranial adjustments executed by the human operator. Any alignment of movements assumes perceptible form, relayed through the human’s network of gravireceptors nestled within the internal auditory apparatus. The Euler angular and linear speeds are not directly measured; instead, they must be integrated using various sensor fusion approaches to enhance the avatar’s fault tolerance in reading its environment. For example, the angular observations of the robot fish are obtained through numerical integro-differential equations, which are solved online as measurements are acquired. Let us introduce the following notation for the inclinometers ($\alpha_\iota$) and accelerometers ($a_\alpha$), with the singular direct sensor measurement $\dot{\alpha}_g$ derived from the gyroscopes. The observation for the fish’s roll velocity combining the three inertial sensors is
$$\omega_\alpha = \frac{d\alpha_\iota}{dt} + \dot{\alpha}_g + \frac{1}{d_\alpha}\int_t a_\alpha \, dt,$$
while the pitch velocity is modelled by
$$\omega_\beta = \frac{d\beta_\iota}{dt} + \dot{\beta}_g + \frac{1}{d_\beta}\int_t a_\beta \, dt,$$
and the yaw velocity is obtained by
$$\omega_\gamma = \frac{d\gamma_\iota}{dt} + \dot{\gamma}_g + \frac{1}{d_\gamma}\int_t a_\gamma \, dt.$$
Within this context, the tangential accelerations experienced by the robot body are denoted $a_{\alpha,\beta,\gamma}$ [m/s²]. Additionally, the angular velocities measured by the gyroscopes are represented by $\dot{\alpha}, \dot{\beta}, \dot{\gamma}$ [rad/s]. Correspondingly, the inclinometers provide angle measurements denoted $\alpha, \beta, \gamma$ [rad]. These measurements collectively contribute to the comprehensive observability and characterization of the robot’s dynamic behavior and spatial orientation.
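The three integro-differential observations above can be evaluated numerically once the signals are sampled. The following is a minimal sketch for the roll axis, assuming uniformly sampled data, a finite-difference gradient for the inclinometer derivative, and rectangular integration of the accelerometer; the function and variable names are illustrative, not from the authors' implementation:

```python
import numpy as np

def fused_roll_rate(alpha_incl, alpha_dot_gyro, a_alpha, dt, d_alpha):
    """Sketch of the roll-velocity observation: numerical derivative of the
    inclinometer angle alpha_iota, plus the direct gyroscope rate, plus the
    accelerometer integral scaled by 1/d_alpha (names are hypothetical)."""
    d_incl = np.gradient(alpha_incl, dt)   # d(alpha_iota)/dt
    a_int = np.cumsum(a_alpha) * dt        # rectangular integral of a_alpha dt
    return d_incl + alpha_dot_gyro + a_int / d_alpha
```

The pitch and yaw observations follow by substituting the corresponding inclinometer, gyroscope, and accelerometer channels.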
Furthermore, the oscillations of the caudal tail are reflections of the dynamics of the underactuated spine. These dynamics are captured by quantifying encoder pulses, denoted $\eta_t$, which provide precise angular positions for each vertebra. Given that real-time angular measurements of the vertebrae are desired, higher-order data are prioritized. Consequently, derivatives are computed starting from the Taylor series to approximate the angle of each vertebral element with respect to time, denoted $t$:
$$\phi_i \approx \frac{\phi_i^{(0)}}{0!}(t_2 - t_1)^0 + \frac{\phi_i^{(1)}}{1!}(t_2 - t_1)^1 + \frac{\phi_i^{(2)}}{2!}(t_2 - t_1)^2 + \cdots,$$
thus, rearranging the math notation and truncating at the first derivative,
$$\phi_i(t_2) \approx \phi_i(t_1) + \phi_i^{(1)}(t_2 - t_1),$$
solving for $\phi_i^{(1)}$ as the unknown, the first-order derivative ($\phi^{(1)}(t) \equiv \dot{\phi}(t)$) is given by
$$\dot{\phi}(t) = \frac{\phi_2 - \phi_1}{t_2 - t_1},$$
and assuming a vertebra’s angular measurement model in terms of the encoder pulses $\eta$ with resolution $R$, substituting the encoder model into the angular speed function yields, for the first vertebra,
$$\dot{\phi}_1 = \frac{2\pi}{R(t_2 - t_1)}\eta_2 - \frac{2\pi}{R(t_2 - t_1)}\eta_1 = \frac{2\pi}{R}\cdot\frac{\eta_2 - \eta_1}{t_2 - t_1}.$$
As for the second vertebra,
$$\dot{\phi}_2 = \dot{\phi}_1 + \frac{2\pi}{R}\cdot\frac{\eta_2 - \eta_1}{t_2 - t_1},$$
$$\dot{\phi}_3 = \dot{\phi}_1 + \dot{\phi}_2 + \frac{2\pi}{R}\cdot\frac{\eta_2 - \eta_1}{t_2 - t_1},$$
and
$$\dot{\phi}_4 = \dot{\phi}_1 + \dot{\phi}_2 + \dot{\phi}_3 + \frac{2\pi}{R}\cdot\frac{\eta_2 - \eta_1}{t_2 - t_1}.$$
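The cumulative encoder relations above admit a direct numerical sketch, assuming a single pulse-count increment $\eta_2 - \eta_1$ over the interval $t_2 - t_1$ shared by all four vertebrae, as written in the equations (names are illustrative):

```python
import math

def vertebra_rates(eta1, eta2, t1, t2, R, n_vertebrae=4):
    """Sketch: incremental encoder pulses to angular rates for a chain of
    vertebrae, following the cumulative form in the text. Each rate is the
    sum of all previous rates plus the base encoder increment."""
    base = (2.0 * math.pi / R) * (eta2 - eta1) / (t2 - t1)  # phi_dot_1
    rates = [base]
    for _ in range(1, n_vertebrae):
        rates.append(sum(rates) + base)  # phi_dot_i = sum of previous + base
    return rates
```

With this cumulative form, the rates grow as 1, 2, 4, 8 times the base increment, reflecting how each distal vertebra accumulates the motion of the more proximal ones.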
The preliminary sensing models serve as a comprehensive representation, strategically integrated into the control models as crucial feedback terms. A detailed exploration of this integration is elucidated in Section 6 and Section 7.

4. Deep ANN-Based EMG Data Classification

This section details the experimental acquisition of EMG data, their spatial filtering, and pattern extraction achieved through the statistical combination of linear envelopes. Additionally, an adaptive method for class separation and data dispersion reduction is described. This section also covers the structure of a deep neural network, presenting its classification output results from mapping input EMG stimuli.
A related study reported a system for automatic pattern generation for neurosimulation in [34], where a neurointerface was used as a neuro-protocol for outputting finger deflection and nerve stimulation. In the present research, numerous experiments were carried out to pinpoint the optimal electrode placement and achieve precise electromyographic readings for each predefined movement in the experiment. The positions of the electrodes were systematically adjusted, and the results from each trial were compared. Upon data analysis, it was discerned that the most effective electrode placement is on the ulnar nerve, situated amidst the muscles flexor digitorum superficialis, flexor digitorum profundus, and flexor carpi ulnaris. A series of more than ten experiments was executed for each planned stimulus or action involving hands, allowing a 2 s interval between each action, including the opening and closing of hands, as well as the extension and flexion of the thumb, index, middle, ring, and little fingers. The data were measured by a g.MOBIlab+ device with two-channel electrodes and quantified in microvolts per second [μV/s], as depicted in Figure 3.
The data acquired from the electromyogram often exhibit substantial noise, attributed to both the inherent nature of the signal and external vibrational factors. To refine the data quality by mitigating this noise, a filtering process is essential.
In this context, a second-order Notch filter was utilized. This filter is tailored to target specific frequencies linked to noise, proving particularly effective in eliminating electrical interferences and other forms of stationary noise [35]. A Notch filter is a band-rejection filter that greatly reduces interference caused by a specific frequency component or a narrow-band signal. Hence, the second-order filter is represented by the following transfer function in the analog Laplace domain:
$$H(s) = \frac{s^2 + \omega_0^2}{s^2 + 2\xi\omega_0 s + \omega_0^2},$$
where $\omega_0$ signifies the angular cutoff frequency targeted for elimination, and $2\xi$ signifies the damping factor, or filter quality, determining the bandwidth. Consequently, we solve to obtain its discrete form in the physical variable space. The bilinear transformation relates the variable $s$ from the Laplace domain to the variable $z_k$ in the Z domain, considering $T$ as the sampling period, and is defined as follows:
$$s \approx \frac{1}{T}\cdot\frac{z_k - 1}{z_k + 1}.$$
Upon substituting the previous expression into the transfer function of the analog Notch filter and algebraically simplifying, the following transfer function in the Z domain is obtained, redefining the notation as $\upsilon_t = z_k$ to match the physical variable:
$$h(\upsilon_t) = \frac{1 - 2\cos(\omega_0 t)\,\upsilon_t^{-1} + \upsilon_t^{-2}}{1 - 2\xi\cos(\omega_0 t)\,\upsilon_t^{-1} + \xi\,\upsilon_t^{-2}}.$$
The second-order Notch filter $h(\upsilon_t)$ was employed on raw EMG data to alleviate noise resulting from electrical impedance and vibrational electrode interference, with parameters set at $\omega_0 = 256$ Hz and $\xi = 0.1$, and results depicted in Figure 4.
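Assuming a normalized notch frequency $\theta = \omega_0 T$ and applying $h(\upsilon_t)$ sample by sample, the implied difference equation can be sketched as follows; this is a hypothetical direct-form implementation for illustration, not the authors' code:

```python
import numpy as np

def notch_filter(x, w0, xi, fs):
    """Direct-form sketch of the discrete notch in the text.
    theta = w0/fs is the normalized notch frequency; the feedforward part
    places zeros on the unit circle at exp(+/- j*theta), nulling that tone,
    while xi damps the feedback poles."""
    theta = w0 / fs
    c = np.cos(theta)
    y = np.zeros_like(x, dtype=float)
    for k in range(len(x)):
        xk1 = x[k - 1] if k >= 1 else 0.0
        xk2 = x[k - 2] if k >= 2 else 0.0
        yk1 = y[k - 1] if k >= 1 else 0.0
        yk2 = y[k - 2] if k >= 2 else 0.0
        # y[k] from h(v): numerator on x, denominator feedback on y
        y[k] = x[k] - 2 * c * xk1 + xk2 + 2 * xi * c * yk1 - xi * yk2
    return y
```

Because the zeros sit exactly on the unit circle, a sinusoid at the notch frequency is driven to zero in steady state, which is the behavior exploited to suppress a fixed interference line in the EMG recordings.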
Subsequently, while other studies explored time and frequency feature extraction, as seen in [36], in the present context, by utilizing the outcomes of the Notch filter, our data undergo processing through three distinct filters or linear envelopes. This serves as a secondary spatial filter and functions as a pattern extraction mechanism. These include a filter for average variability, one for linear variability, and another for average dispersion. Each filter serves a specific purpose, enabling the analysis of different aspects of the signal. Consider $n$ as the number of measurements constituting a single experimental stimulus, and let $N$ represent the entire sampled data space obtained from multiple measurements related to the same stimulus. Furthermore, denote $\hat{v}_i$ as the $i$th EMG measurement of an upper limb, measured in microvolts (μV). From such statements, the following Propositions 1–3 are introduced as new data patterns.
Proposition 1
(Filter $\gamma$). The $\gamma$ pattern refers to a statistical linear envelope described by the difference between an EMG measurement $\hat{v}_i$ and the local mean of a window of $n$ samples in an experiment:
$$\gamma(v_k) = \hat{v}_i - \frac{1}{n}\sum_{k=1}^{n} v_k.$$
Proposition 2
(Filter $\lambda$). The $\lambda(v_k)$ pattern refers to a statistical linear envelope denoted by the difference between an EMG measurement $\hat{v}_i$ and the statistical mean of the whole population of experiments of the same type,
$$\lambda(v_k) = \hat{v}_i - \frac{1}{N_k}\sum_{k=1}^{N_k} \hat{v}_k.$$
Proposition 3
(Filter $\Omega$). The $\Omega$ pattern refers to a statistical linear envelope denoted by the difference of statistical means between the population of one experiment and the whole population of numerous experiments of the same type:
$$\Omega(v_k) = \frac{1}{n}\sum_{k=1}^{n} v_k - \frac{1}{N}\sum_{k=1}^{N} v_k.$$
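Under the reading that the two means are taken over one experiment's window and over the pooled population of same-type experiments, Propositions 1–3 can be sketched as follows; this is a minimal interpretation with illustrative names, not the authors' implementation:

```python
import numpy as np

def emg_patterns(trial, population):
    """Sketch of the three statistical linear envelopes (Propositions 1-3).
    trial: samples of one experiment; population: pooled samples of all
    experiments of the same stimulus type (hypothetical interface)."""
    mu_trial = trial.mean()      # local / per-experiment mean (1/n sum)
    mu_pop = population.mean()   # mean over the whole population (1/N sum)
    gamma = trial - mu_trial     # deviation from the experiment mean
    lam = trial - mu_pop         # deviation from the population mean
    omega = mu_trial - mu_pop    # mean-to-mean offset (a scalar per trial)
    return gamma, lam, omega
```

Stacking the three outputs per sample yields the points $\delta_k = (\Omega_k, \gamma_k, \lambda_k)$ used later in the $\Omega\gamma\lambda$-space.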
Hence, let the vector $\delta \in \mathbb{R}^3$ such that $\delta_k = (\Omega_k, \gamma_k, \lambda_k)$ represent filtered data points in the $\Omega\gamma\lambda$-space. This study includes a brief data preprocessing stage as a method to improve multi-class separability and reduce data scattering in pattern extraction. Three distinctive patterns ($\gamma$, $\lambda$, $\Omega$) converge in Figure 5. This illustration exclusively features patterns associated with sequences of muscular stimuli from both the right and left hands. For supplementary stimulus plots, refer to Appendix A at the end of this manuscript.
From numerous laboratory experiments, over 75% of the sampled raw data fall within the range of one standard deviation. Consider the standard deviation vector $\sigma \in \mathbb{R}^3$, $\sigma = (\sigma_\Omega, \sigma_\gamma, \sigma_\lambda)$, which encompasses the three spatial components through its norm:
$$\|\sigma\| = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \left\| \begin{pmatrix} \Omega_k \\ \gamma_k \\ \lambda_k \end{pmatrix} - \begin{pmatrix} \mu_\Omega \\ \mu_\gamma \\ \mu_\lambda \end{pmatrix} \right\|_2^2}.$$
Building upon the preceding statement, we can formulate an adaptive discrimination criterion, as elucidated in Definition 1.
Definition 1
(Discrimination condition). Consider the datum $\delta_j$ as preprocessed EMG data located within a radius of magnitude $\kappa_d$ times the standard deviation norm $\|\sigma\|$:
$$\delta_j = \begin{cases} \delta_k, & \kappa_d \|\sigma\| \le \|\delta\| \\ \mathbf{0}, & \kappa_d \|\sigma\| > \|\delta\| \end{cases}$$
where $\mathbf{0} = (0, 0, 0)^\top$ represents discriminated data.
Hence, consider Definition 1 in the current scenario with $\kappa_d = 1.0$, which serves as a tuning discrimination factor. The norm $l_h$ represents the distance between the frame origin and any class in the $\Omega\gamma\lambda$-space. This distance is adaptively calculated based on the statistics of each EMG class:
$$l_h = \sqrt{(\kappa_\Omega \sigma_\Omega)^2 + (\kappa_\gamma \sigma_\gamma)^2 + (\kappa_\lambda \sigma_\lambda)^2},$$
where the coefficients κ Ω , κ γ , and κ λ are smooth adjustment parameters to set separability along the axes. Hence, relocating each class center to a new position is stated by Proposition 4.
Proposition 4
(Class separability factor). A new class position $\mu^+_{\Omega,\gamma,\lambda}$ in the $\Omega\gamma\lambda$-space is established by the statistically adaptive linear relationship:
$$\mu_\Omega^+ = \mu_\Omega + \zeta_\Omega l_h,$$
$$\mu_\gamma^+ = \mu_\gamma + \zeta_\gamma l_h,$$
and
$$\mu_\lambda^+ = \mu_\lambda + \zeta_\lambda l_h,$$
where $\zeta_{\Omega,\gamma,\lambda}$ are coarse in-space separability factors. The mean values $\mu_{\Omega,\gamma,\lambda}$ are the actual class positions obtained from the linear envelopes $\Omega(v_k)$, $\gamma(v_k)$, and $\lambda(v_k)$.
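The adaptive radius $l_h$ and the center shift of Proposition 4 combine into a short computation; the following sketch assumes per-axis tuning vectors $\kappa$ and $\zeta$ (names are illustrative):

```python
import numpy as np

def separate_class(mu, sigma, kappa, zeta):
    """Sketch of the class-separability step: compute the adaptive class
    distance l_h from the per-axis standard deviations scaled by the fine
    factors kappa, then shift the class center mu by the coarse factors
    zeta times l_h (Proposition 4)."""
    l_h = np.sqrt(np.sum((kappa * sigma) ** 2))  # adaptive class distance
    return mu + zeta * l_h                        # shifted center mu^+
```

Each EMG class is shifted by an amount proportional to its own statistics, so tightly clustered classes move less than widely scattered ones, which is what yields the improved separability shown in Figure 6.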
Thus, by following the step-by-step method outlined earlier, Figure 6 showcases the extracted features of the EMG data, representing diverse experimental muscular stimuli. These results hold notable significance in the research, as they successfully achieve the desired class separability and data scattering, serving as crucial inputs for the multilayer ANN.
Henceforth, the focus lies on identifying and interpreting the EMG patterns projected in the γ λ Ω -space, as illustrated in Figure 6. The subsequent part of this section delves into the architecture and structure of the deep ANN employed as a classifier, providing a detailed account of the training process. Additionally, this section highlights the performance metrics and results achieved by the classifier, offering insights into its effectiveness. Despite the challenges posed by nonlinearity, multidimensionality, and extensive datasets, various neural network structures were configured and experimented with. These configurations involved exploring different combinations of hidden layers, neurons, and the number of outputs in the ANN.
A concise comparative analysis was performed among three distinct neural network architectures: the feedforward multilayer network, the convolutional network, and the competitive self-organizing map. Despite notable configuration differences, efforts were made to maintain similar features, such as 100 training epochs, five hidden layers (except for the competitive structure), and an equal number of input and output neurons (three input and four output neurons). The configuration parameters and results are presented in Table 2. This study delves deeper into the feedforward multilayer ANN due to its superior classification rate. Further implementations with enhanced features are planned in the C/C++ language as a compiled program.
To achieve the highest success rate in accurate data classification through experimentation, the final ANN was designed with perceptron units, as depicted in Figure 7. It featured three inputs corresponding to the three EMG patterns γ , λ , Ω and included 12 hidden layers, each with 20 neurons. The supervised training process, conducted on a standard-capability computer, took approximately 20–30 min, resulting in nearly 1% error in pattern classification.
However, in the initial stages of the classification computations, with some mistuned adaptive parameters, the classification error was notably higher, even with much deeper ANN structures, such as 100 hidden layers with 99 neurons per layer. To facilitate deployment, the multilayer feedforward architecture was trained for 300 epochs and implemented in C/C++ using the Fast Artificial Neural Network (FANN) library, which produces extremely fast binary code once the ANN is trained. In the training process of this research, about 50 datasets from separate experiments for each type of muscular stimulus were collectively stored, each comprising several thousand muscular repetitions. A distinct classification label was assigned a priori to each class type within the pattern space. To demonstrate the reliability of the approach, 16 different stimuli per ANN were established for classification and recognition, resulting in the ANN having four combinatory outputs, each with two possible states. Figure 8 depicts mixed sequences encompassing all types of EMG stimuli, with the ANN achieving a 100% correct classification rate.
Moreover, Table 3 delineates the mapping relationship between the ANN’s input, represented by the EMG stimuli, and the ANN’s output linked to a swimming behavior for controlling the robotic avatar.

5. Fuzzy-Based Oscillation Pattern Generator

This section delineates the methodology utilized to produce electric oscillatory signals, essential for stimulating the inputs of electromagnetic devices (solenoids) embedded within the mechanized oscillator of the biorobotic fish. The outlined approach for generating periodic electric signals encompasses three key components: (a) the implementation of a fuzzy controller; (b) the incorporation of a set of periodic functions dictating angular oscillations to achieve the desired behaviors in the caudal undulation of the fish; and (c) the integration of a transformation model capable of adapting caudal oscillation patterns into step signals, facilitating the operation of the dual-coil electromagnetic oscillator.
Another study [37] reported a different approach, a neuro-fuzzy-topological biodynamical controller for muscular-like joint actuators. In the present research, an innovative strategy suggested for the fuzzy controller involves the combination of three distinct input fuzzy sets: the artificial neural network outputs transformed into crisp sets and the linear and angular velocities of the robot derived from sensor measurements. Simultaneously, the fuzzy outputs correspond to magnitudes representing the periods of time utilized to regulate the frequency and periodicity of the caudal oscillations. This comprehensive integration enables the fuzzy controller to effectively process both neural network-derived information and real-time sensor data, dynamically adjusting the temporal parameters that govern the fish’s undulatory motions.
The depiction of the outputs from the EMG pattern recognition neural network is outlined in Table 3. Each binary output in the table is linked to its respective crisp-type input fuzzy sets when represented in the decimal numerical base y C , as illustrated in Figure 9a. Moreover, Definition 2 details the parametric nature of the input sets.
Definition 2
(Input fuzzy sets). The output of the artificial neural network (ANN) corresponds to the fuzzy input, denoted y C , and only when it falls within the crisp set C = 0 , 1 , , 15 , the membership in the crisp set is referred to as μ y ( y C ) .
\mu_y(y_C) = \begin{cases} 0, & y_C \notin C \\ 1, & y_C \in C. \end{cases}
In relation to the sensor observations of the biorobot, the thrusting velocity v [cm/s] and angular velocity ω [rad/s] exhibit S-shaped sets modeled by sigmoid membership functions. This modeling approach is applied consistently to both types of input variables, capturing their extreme-sided characteristics. Let μ s , f ( v ) define the sets labeled “stop” and “fast” in relation to the thrusting velocity v, elucidated by
\mu_{s,f}(v) = \frac{1}{1 + e^{\pm(v - a)}}.
Likewise, for the sets designated “left–fast” ( l f ) and “right–fast” ( r f ) concerning the angular velocity ω, the S-shaped sets are modeled as
\mu_{lf,rf}(\omega) = \frac{1}{1 + e^{\pm(\omega - b)}}.
In addition, the rest of the sets in between are any of the kth Gauss membership functions (“slow”, “normal”, and “agile”) for the robot’s thrusting velocity with parametric mean v ¯ and standard deviation σ v k ,
\mu_k(v) = e^{-\frac{(\bar{v} - v)^2}{2\sigma_{v_k}^2}},
and for its angular velocity (“left–slow”, “no turn”, and “right–slow”), with parametric mean ω ¯ and standard deviation σ ω k ,
\mu_k(\omega) = e^{-\frac{(\bar{\omega} - \omega)^2}{2\sigma_{\omega_k}^2}}.
Therefore, the reasoning rules articulated in the inference engine have been devised by applying inputs derived from Table 3, specifically tailored to generate desired outputs that align with the oscillation periods T [s] of the fish’s undulation frequency (see Figure 10).
Definition 3
( v , ω = a n y ). For any linguistic value v representing sensor observations of the fish’s thrusting velocity,
v = a n y stop or slow or normal or agile or fast .
Likewise, for any linguistic value ω representing sensor observations of the fish’s angular velocity,
ω = a n y left-fast or left-slow or no turn or right-slow or right-fast .
Therefore, the following inference rules describe the essential robot fish swimming behavior.
  • if y C = sink and v = any or ω = any then T f , r , l =  too_slow
  • if y C = buoyant and v = any or ω = any then T f , r , l =  too_slow
  • if y C = gliding and v = any and ω = any then T f = slow, T r , l =  too_slow
  • if y C = slow_thrust and v = any and ω = any then T f = slow, T r , l =  too_slow
  • if y C = medium_thrust and v = any and ω = any then T f = normal, T r , l =  too_slow
  • if y C = fast_thrust and v = any and ω = any then T f = agile, T r , l =  too_slow
  • if y C = slow-right_maneuvering and v = any and ω = any then T f , l = too_slow, T r =  normal
  • if y C = medium-right_maneuvering and v = any and ω = any then T f = slow, T r = agile, T l =  too_slow
  • if y C = fast-right_maneuvering and v = any and ω = any then T f = normal, T r = fast, T l =  too_slow
  • if y C = slow-left_maneuvering and v = any and ω = any then T f , r = too_slow, T l =  normal
  • if y C = medium-left_maneuvering and v = any and ω = any then T f = slow, T r = too_slow, T l =  agile
  • if y C = fast-left_maneuvering and v = any and ω = any then T f = normal, T r = too_slow, T l =  fast
  • if y C = speed-up_right-turn and v = any and ω = any then T f , l = too_slow, T r =  fast
  • if y C = speed-up_left-turn and v = any and ω = any then T f , r = too_slow, T l =  fast
  • if y C = slow-down_right-turn and v = any and ω = any then T f , l = too_slow, T r =  slow
  • if y C = slow-down_left-turn and v = any and ω = any then T f , r = too_slow, T l =  slow
Hence, the output fuzzy sets delineate the tail undulation speeds of the robot fish across the caudal oscillation period T [s], as illustrated in Figure 10. Notably, three identical output sets with distinct concurrent values correspond to the periods of three distinct periodic functions: forward (f), right (r), and left (l), as subsequently defined by Equations (24a), (24b), and (24c).
The inference engine’s rules dictate the application of fuzzy operators to assess the terms involved in fuzzy decision-making. As for the thrusting velocity fuzzy sets, where v = a n y was previously established and by applying Definition 3, the fuzzy operator is described by
\mu_v^{\,max} = \max_{k \in v}\left\{\mu_{stop}, \mu_{slow}, \mu_{normal}, \mu_{agile}, \mu_{fast}\right\}.
Likewise, the angular velocity fuzzy sets, where previously ω = a n y was stated, by applying the second part of Definition 3, the following fuzzy operator is described by
\mu_\omega^{\,max} = \max_{k \in \omega}\left\{\mu_{left\text{-}fast}, \mu_{left\text{-}slow}, \mu_{no\;turn}, \mu_{right\text{-}slow}, \mu_{right\text{-}fast}\right\}.
Therefore, according to the premise ( v = a n y or ω = a n y ), the following fuzzy operator applies:
\mu_{v,\omega}^{\,max} = \max_{k \in v \cup \omega}\left\{\mu_v^{\,max}, \mu_\omega^{\,max}\right\}.
Essentially, the fuzzification process applies in a strictly similar manner to the rest of the inference rules, according to the following proposition.
Proposition 5
(Combined rules μ i * ). The general fuzzy membership expression for the ith inference rule that combines multiple inference propositions is
\mu_i^* = \min\left\{\mu_{y_i}(y_{C_i}) = 1, \; \mu_k^{\,max}(v, \omega)\right\}.
In any crisp set, each μ i attains a distinct value of 1, irrespective of the corresponding inference rule indexed by i. This value aligns with the ith entry in the neural network outputs outlined in Table 3. Additionally, k represents a specific fuzzy set associated with the same input.
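The max operator over one input's sets and the min of Proposition 5 reduce to two small C helpers. Function names are illustrative, not from the paper.

```c
/* Max operator over the five membership values of one input variable
 * (thrusting velocity or angular velocity). */
double mu_max(const double *mu, int n) {
    double m = mu[0];
    for (int i = 1; i < n; i++)
        if (mu[i] > m) m = mu[i];
    return m;
}

/* Proposition 5: combined rule strength, the min of the crisp ANN
 * membership (0 or 1) and the strongest sensor-set membership. */
double rule_strength(double mu_crisp, double mu_kmax) {
    return mu_crisp < mu_kmax ? mu_crisp : mu_kmax;
}
```

A rule whose crisp ANN condition fails (`mu_crisp = 0`) is thereby suppressed regardless of the sensor memberships.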
Executing the previously mentioned proposition, Remark 1 provides a demonstration of its application.
Remark 1
(Proposition 5 example). Let us consider rule i = 1 , where y C 1 = sink and either ( v = 10.0 cm/s or ω = 0.0 rad/s). Thus, articulated in the context of the resulting fuzzy operator,
\mu_1^* = \min\left\{\mu_{sink}(y_C), \; \max\left\{\mu_v, \mu_\omega\right\}\right\} = \min\{1, 1\} = 1.
Here, μ v ( 10.0 ) = 1 and μ ω ( 0.0 ) = 1 . Based on the earlier proposition, the resulting μ 1 * = 1 , and given that rule 1 indicates an output period T = “too_slow”, its inverse outcome is T ( μ 1 * ) = 6.0 s. This outcome is entirely consistent, because the fish’s swimming undulation slows down to a 6 s period, the slowest oscillation period.
Moving forward, the primary objective of the defuzzification process is to attain an inverse solution. The three output categories for the periods T are “forward”, “right”, and “left”, all sharing identical output fuzzy sets of T (Figure 10). The output fuzzy sets comprise two categories of distributions, sigmoid and Gauss, whose functional forms are specified in Definition 4.
Definition 4
(Output fuzzy sets). The membership functions for both extreme-sided output sets are defined as “fast” with μ f and “too slow” with μ t s , such that
\mu_{f,ts}(T) = \frac{1}{1 + e^{\pm(T_k - c_k)}}.
Here, T [s] denotes the period of time for oscillatory functions, with the slope direction determined by its sign. The parameter c represents an offset, and k is the numerical index of a specific set.
Furthermore, the membership functions for three intermediate output sets are defined as “agile” with μ a , “normal” with μ n , and “slow” with μ s , such that:
\mu_{a,n,s}(T) = e^{-\frac{(\bar{T}_k - T)^2}{2\sigma_{T_k}^2}}.
Here, T ¯ k represents the mean value, and σ T k denotes the standard deviation of set k, with k serving as the numeric index of a specific set.
In accordance with Definition 4, any μ k possesses a normalized membership outcome within the interval μ k ∈ [ 0 , 1 ] . The inverse sigmoid membership function, mapping μ k back to T k ∈ ℝ, is determined by the general inverse expression:
T(\mu_k) = \ln\left(\frac{1}{\mu_k} - 1\right) \pm c_k.
Similarly, the inverse Gaussian membership function, mapping μ k back to T k ∈ ℝ, is defined by the inverse function:
T(\mu_k) = \bar{T}_k \pm \sqrt{-2\sigma_k^2 \ln(\mu_k)}.
Hence, exclusively for the jth category among the output fuzzy sets affected by the fuzzy inference rule essential for estimating the value of T, the centroid method is employed for defuzzification through the following expression:
T_{f,r,l} = \frac{\sum_j \mu_j^* \, T_j(\mu_j^*)}{\sum_j \mu_j^*},
or more specifically,
T_{f,r,l} = \frac{\mu_f^* T(\mu_f^*) + \mu_a^* T(\mu_a^*) + \mu_n^* T(\mu_n^*) + \mu_s^* T(\mu_s^*) + \mu_{ts}^* T(\mu_{ts}^*)}{\mu_f^* + \mu_a^* + \mu_n^* + \mu_s^* + \mu_{ts}^*}.
For terms T ( μ f ) and T ( μ t s ) , the inverse membership function (20) is applicable, whereas for the remaining sets in the jth category, the inverse membership (21) is applied.
Reference [38] reported a CPG model to control a robot fish’s motion in swimming and crawling and let it perform different motions influenced by sensory input from light, water, and touch sensors. The study of [39] reported a CPG controller with proprioceptive sensory feedback for an underactuated robot fish. Oscillators and central pattern generators (CPGs) are closely related concepts. Oscillators are mathematical or physical systems exhibiting periodic behavior and are characterized by the oscillation around a stable equilibrium point (limit cycle). In the context of CPGs, these are neural networks that utilize oscillators that create positive and negative feedback loops, allowing for self-sustaining oscillations and the generation of rhythmic patterns, particularly implemented in numerous robotic systems [40]. CPGs are neural networks found in the central nervous system of animals (e.g., fish swimming [41]) that generate rhythmic patterns of motor activity and are responsible for generating and coordinating optimized [42] repetitive movements.
The present research proposes an approach different from the basic CPG model and, unlike other wire-driven robot fish motion approaches [43], introduces three fundamental undulation functions: forward, right-turning, and left-turning. These functions are derived from empirical measurements of the robot’s caudal fin oscillation angles. A distinctive behavioral undulation swim is achieved by blending these three oscillation functions, each incorporating corresponding estimation magnitudes derived from the fuzzy controller outputs. The formulation of each function involves fitting empirical data through Fourier series. Unlike other approaches to CPG parameter adjustment [44], the fuzzy outputs obtained from (23) to estimate the time periods T f , T r , T l play a pivotal role in parameterizing the time periods of the periodic oscillation functions, as outlined in Proposition 6.
Proposition 6
(Oscillation pattern function). Three fundamental caudal oscillation patterns, designed to generate swimming undulations, are introduced, each characterized by 11 pre-defined numerical coefficients. These patterns are described by amplitude functions, denoted ψ ( ϕ , T ) , where ϕ represents oscillation angles, and the time period T is an adjustable parameter.
The undulation pattern for forward motion is provided by the following function:
\psi_f(\phi, T_f) = 0.0997 + 0.3327\cos\left(\phi\tfrac{2\pi}{T_f}\right) - 0.1297\sin\left(\phi\tfrac{2\pi}{T_f}\right) - 0.5760\cos\left(2\phi\tfrac{2\pi}{T_f}\right) + 0.3701\sin\left(2\phi\tfrac{2\pi}{T_f}\right) - 0.1431\cos\left(3\phi\tfrac{2\pi}{T_f}\right) + 0.1055\sin\left(3\phi\tfrac{2\pi}{T_f}\right) - 0.0870\cos\left(4\phi\tfrac{2\pi}{T_f}\right) + 0.06323\sin\left(4\phi\tfrac{2\pi}{T_f}\right) - 0.0664\cos\left(5\phi\tfrac{2\pi}{T_f}\right) - 0.0664\sin\left(5\phi\tfrac{2\pi}{T_f}\right).
Likewise, the undulation pattern for right-turn motion is given by the function
\psi_r(\phi, T_r) = 0.3324 + 0.1915\cos\left(\phi\tfrac{2\pi}{T_r}\right) + 0.0622\sin\left(\phi\tfrac{2\pi}{T_r}\right) + 0.4019\cos\left(2\phi\tfrac{2\pi}{T_r}\right) + 0.2920\sin\left(2\phi\tfrac{2\pi}{T_r}\right) - 0.264\cos\left(3\phi\tfrac{2\pi}{T_r}\right) - 0.3634\sin\left(3\phi\tfrac{2\pi}{T_r}\right) - 0.0459\cos\left(4\phi\tfrac{2\pi}{T_r}\right) - 0.1413\sin\left(4\phi\tfrac{2\pi}{T_r}\right) + 0.0665\sin\left(5\phi\tfrac{2\pi}{T_r}\right).
Finally, the undulation pattern for left-sided turning motion is established by the expression
\psi_l(\phi, T_l) = 0.1994 + 0.125\cos\left(\phi\tfrac{2\pi}{T_l}\right) - 0.0622\sin\left(\phi\tfrac{2\pi}{T_l}\right) + 0.3354\cos\left(2\phi\tfrac{2\pi}{T_l}\right) - 0.292\sin\left(2\phi\tfrac{2\pi}{T_l}\right) - 0.3305\cos\left(3\phi\tfrac{2\pi}{T_l}\right) + 0.3634\sin\left(3\phi\tfrac{2\pi}{T_l}\right) - 0.1124\cos\left(4\phi\tfrac{2\pi}{T_l}\right) + 0.1413\sin\left(4\phi\tfrac{2\pi}{T_l}\right) - 0.0664\cos\left(5\phi\tfrac{2\pi}{T_l}\right) + 0.2659\sin\left(5\phi\tfrac{2\pi}{T_l}\right).
The approaches to forward, right-turn, and left-turn based on the findings of Proposition 6 are illustrated in Figure 11. Additionally, a novel combined oscillation pattern emerges by blending these three patterns (25), each assigned distinct numerical weights through the neuro-fuzzy controller.
ψ ( ϕ , T f , T r , T l ) = ψ f ( ϕ , T f ) + ψ r ( ϕ , T r ) + ψ l ( ϕ , T l ) .
The proposed robotic mechanism features a multilink-based propulsive spine driven by an electromagnetic oscillator composed of a pair of antagonistic solenoids that necessitate a synchronized sequence of electric pulses (see Figure 12b). The amplitudes generated by ψ ( ϕ , T f , T r , T l ) in Equation (25) essentially represent the desired undulation pattern for the robotic fish’s caudal fin. However, these oscillations are not directly suitable for the inputs of the coils. To address this, our work introduces a decomposition of ψ into two step signals centered around a stable equilibrium point (limit cycle), one for the right coil (positive with respect to the limit cycle) and another for the left coil (negative with respect to the limit cycle). The coil’s step function, for either the right-sided or left-sided coil, is given by s r , l , taking the equilibrium point as their limit value ξ :
s_{r,l} = \begin{cases} 0, & \psi \le \xi \\ 1, & \psi > \xi. \end{cases}
The Boolean { 0 , 1 } values derived from (26) are illustrated in Figure 11a–c as sequences of step signals for both the right (R) and left (L) solenoids of the robot. Each plot maintains a uniform time scale, ensuring vertical alignment for clarity. In contrast to the work presented in [45], which focuses on the swimming modes and gait transition of a robotic fish, the current study, as depicted in Figure 11, introduces a distinctive context. The three oscillatory functions ψ f , r , l are displayed both overlapped and separated, highlighting their unique decoupled step signals. Assuming ξ = 0 for all ϕ in each case, in Figure 11a, ψ r = ψ l ≈ 0 , with T r , l = 6 s; in Figure 11b, ψ f = ψ l ≈ 0 , with T f , l = 6 s; and in Figure 11c, ψ f = ψ r ≈ 0 , with T f , r = 6 s. For a more comprehensive understanding, Figure 12b presents the electromechanical components of the caudal motion oscillator.
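The decomposition of ψ into coil step inputs is a simple threshold about the equilibrium ξ, per Equation (26). The sketch below assigns the positive half to the right coil and, as an assumption consistent with the antagonistic design described in the text, mirrors the threshold for the left coil.

```c
/* Eq. (26): threshold the blended oscillation psi about the
 * equilibrium (limit-cycle) value xi. The right coil fires on the
 * positive side; the left coil mirroring the negative side is an
 * assumption following the antagonistic solenoid layout. */
int step_right(double psi, double xi) { return psi > xi ? 1 : 0; }
int step_left(double psi, double xi)  { return psi < xi ? 1 : 0; }
```

Sampling ψ over time and feeding the two step trains to the R and L solenoids reproduces the coordinated pulse sequences of Figure 11.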

6. Robot Fish Biomechanical Model

This section introduces the design of the robotic fish mechanism and explores the model of the underactuated physical system to illustrate the fish undulation motions. The conceptualization of the proposed system is inspired by an underactuated structure featuring a links-based caudal spine with passive joints, utilizing helical springs to facilitate undulatory locomotion (see Figure 12a). The robotic fish structure introduces a mechanical oscillator comprising a pair of solenoids activated through coordinated sequences of step signals, as described by (26). Essentially, the electromagnetic coils generate antagonistic attraction/repulsion linear motions, translating into rhythmic oscillations within a mechanized four-bar linkage (depicted in Figure 12b). This linkage takes on the form of a trapezoid composed of two parallel rigid links and two lateral linear springs functioning as antagonistic artificial muscles. Moreover, beneath the electromagnetic oscillator, there is a ballasting device for either submersion or buoyancy (Figure 12c). The robot’s fixed reference system consists of the X axis, which intersects the lateral sides, and the Y axis aligned with the robot’s longitudinal axis.
In Figure 12a, the electromagnetic oscillator of the robotic avatar responds to opposing coordinated sequences of step signals. The right-sided (R) and left-sided (L) solenoids counteract each other’s oscillations, generating angular moments in the trapezoid linkage (first vertebra). Both solenoids are identical, each comprising a coil and a cylindrical neodymium magnet nucleus. The trapezoid linkage, depicted in Figure 12b, experiences magnetic forces ± f o s R L at the two neodymium magnet attachments situated at a radius of r o s , resulting in two torques, τ o s and τ s , with respect to their respective rotation centers. As input forces ± f o s R L come into play, the linear muscle in its elongated state stores energy. Upon restitution contraction, this stored energy propels the rotation of the link r s , which constitutes the first vertebra of the fish.
Furthermore, the caudal musculoskeletal structure, comprising four links ( 1 , 2 , 3 , 4 ) and three passive joints ( θ 1 , θ 2 , θ 3 ), facilitates a sequential rotary motion transmitted from link 1 to link 4. This transmission is accompanied by an incremental storage of energy in each helical spring that is serially connected. Consequently, the last link (link 4) undulates with significantly greater mechanical advantage. In summary, a single electrical pulse in any coil is sufficient to induce a pronounced undulation in the swimming motion of the robot’s skeleton.
As for the ballasting control device situated beneath the floor of the electromechanical oscillator, activation occurs only when either of two possible outputs from the artificial neural network (ANN) is detected: when y C equals “sink” or “buoyancy”. However, the fuzzy nature of these inputs results in a gradual slowing down of the fish’s undulation to its minimum speed. Additionally, both actions are independently regulated by a dedicated feedback controller overseeing the ballasting device.
Now, assuming knowledge of the input train of electrical step signals s r , l applied to the coils, let us derive the dynamic model of the biorobot, starting from the electromagnetic oscillator and extending to the motion transmitted to the last caudal link. Thus, as illustrated in Figure 12a,b, the force f [N] of the solenoid’s magnetic field oscillator is established on either side (right, denoted R, or left, denoted L),
f = \frac{B^2 A}{2 \mu_o}.
In this context, A [m²] represents the area of the solenoid’s pole. The symbol μ o denotes the magnetic permeability of air, expressed as μ o = 4π × 10⁻⁷ H/m (henries per meter). Hence, the magnetic field B (measured in teslas) at one extreme of a solenoid is approximated by:
B = \frac{\mu_o \, i \, N}{l},
where i represents the coil current [A], N is the number of wire turns in a coil, and l denotes the coil length [m]. Furthermore, a coil’s current is described by the following linear differential equation as a function of time t (in seconds), taking into account a potential difference v (in volts):
i = \frac{1}{L}\int_0^T v \, dt + i_0.
Here, L represents the coil’s inductance (measured in henries, H) with an initial current condition denoted i 0 . Additionally, the coil’s induction model is formulated by:
L = \frac{\mu_o N^2 A}{l}.
In essence, this study states that both lateral solenoids exhibit linear motion characterized by an oscillator force f o s . This force is expressed as:
f_{os} = \frac{\mu_o N^2 A \, i^2}{2 l^2}
and due to the linear impacts of solenoids on both sides R and L (refer to Figure 12a), the first bar of the oscillator mechanism generates a torque, expressed as:
τ o s = ( f o s ) ( r o s ) .
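The chain from coil current to oscillator torque can be sketched in C directly from the equations above. The function names are illustrative assumptions of this sketch.

```c
#include <math.h>

#define MU0 1.25663706212e-6  /* magnetic permeability of air, 4*pi*1e-7 H/m */

/* Field at one extreme of a solenoid: B = mu0 * i * N / l. */
double solenoid_B(double i, int N, double l) { return MU0 * i * N / l; }

/* Pole force: f = B^2 * A / (2 * mu0). */
double solenoid_force(double B, double A) { return B * B * A / (2.0 * MU0); }

/* Torque on the oscillator's input bar: tau_os = f_os * r_os. */
double oscillator_torque(double f_os, double r_os) { return f_os * r_os; }
```

Composing the three calls with the coil current from the integral model of i gives the instantaneous torque driving the trapezoid linkage.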
It is theorized that the restitution/elongation force along the muscle is denoted f m (R or L), with this force being transmitted from the electromechanical oscillator to the antagonistic muscle in the opposite direction (refer to Figure 12b). This implies that the force generated by the linear-motion solenoid in the oscillator’s right-sided coil, denoted f o s R , is applied at point R and subsequently reflected towards point L with an opposite direction, represented as f o s L . Conversely, from the oscillator’s left-sided solenoid, the force f o s L is applied at point L and transmitted to point R as f o s R . For f o s L applied at L,
f m R = f o s L cos ( α R ) ; f m L = f o s R sin ( α L ) .
Hence, the angles α R , L assume significance as the forces acting along the muscles f m differ, resulting in distinct instant elongations x m ( t ) . Consequently, the four-bar trapezoid-shaped oscillator mechanism manifests diverse inner angles, namely, θ 1 , 2 , β , and γ 1 , 2 , as illustrated in Figure 12b.
Thus, prior to deriving an analytical solution for α, it is imperative to formulate a theoretical model for the muscle. In this study, a Hill-type model is adopted, as depicted in Figure 12a (on the right side). The model incorporates a serial element SE (overdamped), a contractile element CE (critically damped), and a parallel element PE (critically damped), each representing a distinct spring-mass-damper system.
The generalized model for the antagonistic muscle is conceptualized in terms of the restitution force, and it is expressed as:
f_m = f_{SE} - (f_{CE} + f_{PE}).
Therefore, by postulating an equivalent restitution/elongation mass m w associated with instantaneous weight-force loads w (such as due to hydrodynamic flows), the preceding model is replaced with Newton’s second law of motion,
f_m = m_w \ddot{x}_{SE} - m_w \ddot{x}_{CE} - m_w \ddot{x}_{PE}.
Furthermore, through the independent solution of each element within the system in terms of elongations, the SE model can be expressed as:
x S E ( t ) = s 1 e λ 1 t + s 2 e λ 2 t .
Here, s 1 , 2 are arbitrary constants representing the damping amplitude. The terms λ 1 , 2 denote the root factors,
\lambda_{1,2} = \frac{-\frac{c_{SE}}{m_w} \pm \sqrt{\left(\frac{c_{SE}}{m_w}\right)^2 - \frac{4 k_{SE}}{m_w}}}{2}.
Here, the factors λ 1 , 2 are expressed in relation to the damping coefficient c S E (in kg/s) and the elasticity coefficient k S E (in kg/s2).
Similarly, for the contractile element CE, its elongation is determined by:
x C E ( t ) = ( c 1 + c 2 t ) e c C E m w t .
With amplitude factors c 1 , 2 and damping coefficient c C E , a similar expression is obtained for the parallel element PE:
x P E ( t ) = ( p 1 + p 2 t ) e c P E m w t .
With amplitude factors p 1 , 2 and damping coefficient c P E , the next step involves substituting these functional forms into the general muscle model,
f_m(t) = m_w \left( \frac{d^2}{dt^2} x_{SE}(t) + \frac{d^2}{dt^2} x_{CE}(t) + \frac{d^2}{dt^2} x_{PE}(t) \right)
such that the complete muscle’s force model f m is formulated by
f_m(t) = s_1 \lambda_1^2 m_w e^{\lambda_1 t} + s_2 \lambda_2^2 m_w e^{\lambda_2 t} + \left(\frac{c_1 c_{CE}^2}{m_w} - 2 c_2 c_{CE} + \frac{c_2 c_{CE}^2}{m_w} t\right) e^{-\frac{c_{CE}}{m_w} t} + \left(\frac{p_1 c_{PE}^2}{m_w} - 2 p_2 c_{PE} + \frac{p_2 c_{PE}^2}{m_w} t\right) e^{-\frac{c_{PE}}{m_w} t}.
Subsequently, simplifying the preceding expression leads to the formulation presented in Proposition 7.
Proposition 7
(Muscle force model). The solution to the muscle force model, based on a Hill’s approach, is derived as a time-dependent function f m ( t ) encompassing its three constituent elements (serial, contractile, and parallel). This formulation is expressed as:
f_m(t) = \left(s_1 \lambda_1^2 e^{\lambda_1 t} + s_2 \lambda_2^2 e^{\lambda_2 t}\right) m_w + \left(\frac{c_1 + c_2 t}{m_w} c_{CE}^2 - 2 c_2 c_{CE}\right) e^{-\frac{c_{CE}}{m_w} t} + \left(\frac{p_1 + p_2 t}{m_w} c_{PE}^2 - 2 p_2 c_{PE}\right) e^{-\frac{c_{PE}}{m_w} t}.
Thus, without loss of generality, considering a muscle model characterized by elongation x m and a force-based model f m , we proceed to derive the passive angles of the oscillator and the output forces f x and f y for 1 .
Under initial conditions, the trapezoid oscillator bars are assumed to have θ 0 = 0°, aligning the four-bar mechanism with the X axis. As the bars rotate by an angle θ 1 due to solenoid impacts at points R or L, the input bar of the oscillator, with a radius of r o s , undergoes an arc displacement s 1 . Simultaneously, the output bar of shorter radius r s experiences a displacement s 2 , such that:
s_2 = \frac{r_s}{r_{os}} s_1.
The arc displacement at point R or L is given by s 1 = r o s θ o s . Consequently, the rotation angle of the input oscillator is expressed as:
θ o s = s 1 r o s .
Therefore, by formulating this relationship in the context of forces and subsequently substituting the newly introduced functions, the resulting expression is
\theta_{os} = \frac{1}{r_{os}} \iint_t \ddot{y} \; dt^2.
Here, y ¨ denotes the linear acceleration of either point R or L along the robot’s Y axis. By replacing the solenoid’s mass-force formulation,
\theta_{os} = \frac{1}{m \, r_{os}} \iint_t f_{os} \; dt^2 = \frac{f t^2}{2 m r_{os}}.
Hence, the functional expression for s 2 takes the form
s_2 = \frac{r_s f_{os} t^2}{2 m r_{os}^2}.
Without loss of generality, the inner angle θ 1 of the oscillator mechanism (refer to Figure 12b) is derived as:
\theta_1 = \frac{\pi}{2} \pm \theta_{os}.
Initially, when the oscillator bars are aligned with respect to the X axis, an angular displacement denoted by θ 1 occurs as a result of the transfer of motion from the solenoid’s tangential linear motion to the input bar. Similarly, in the output bar, the corresponding angular displacement is represented by θ 2 ,
θ 2 = π ± ( θ + Δ θ ) .
Here, Δ θ signifies a minute variation resulting from motion perturbation along the various links of the caudal spine. The selection of the ± operator depends on the robot’s side, whether it is denoted R or L. As part of the analysis strategy, the four-bar oscillator was geometrically simplified to half a trapezoid for the purpose of streamlining deductions (refer to Figure 12b). Within this reduced mechanism, two triangles emerge. One triangle is defined by the parameters r o s , ℓ, d, while the other is characterized by x m ( t ) , ℓ, r s , where ℓ serves as the hypotenuse and the sides d, r o s , and r s remain constant. Consequently, the instantaneous length of the hypotenuse is deduced as follows:
\ell^2 = r_{os}^2 + d^2 - 2 d \, r_{os} \cos(\theta_1).
Upon determining the value of ℓ, the inner angle γ 1 can be derived from the law of cosines as follows:
r_{os}^2 = d^2 + \ell^2 - 2 d \ell \cos(\gamma_1).
Therefore, by isolating γ 1 ,
\gamma_1 = \arccos\left(\frac{d^2 + \ell^2 - r_{os}^2}{2 d \ell}\right).
Until this point, given the knowledge of γ 1 and θ 2 , it is feasible to determine the inner complementary angle γ 2 through the following process:
\gamma_2 = \theta_2 - \gamma_1.
Subsequently, the angle formed by the artificial muscle and the output bar can be established according to the law of sines:
\frac{\sin(\gamma_2)}{x_m} = \frac{\sin(\beta)}{\ell}.
Thus, the inner angle β is
\beta = \arcsin\left(\frac{\ell \sin(\gamma_2)}{x_m}\right),
or, alternatively, an approximation of the muscle length is
x_m = \frac{\ell \sin(\gamma_2)}{\sin(\beta)}.
This is the mechanism through which the input bar transmits a force f m 1 , as defined in expression (33), from the tangent f o s to the output bar, achieving a mechanical advantage denoted f m 2 ,
f_{m_2} = \frac{r_{os}}{r_s} f_{m_1}.
Hence, in accordance with the earlier stipulation in expression (33), Definition 5 delineates the instantaneous angles α R , L .
Definition 5
(Angles α R , L ). The instantaneous angle α, expressed as a function of the inner angles of the oscillator, is introduced by:
\alpha_{R,L} = \beta_{R,L} - \theta_{1_{R,L}}.
It is noteworthy that, owing to the inertial system of the robot, the longitudinal force output component f y aligns with the input force f o s in direction. Consequently, for a right-sided force, we have α R = β R − θ 1 R , where:
f_{x_R} = \frac{r_{os}}{r_s} f_{os_L} \frac{\sin(\alpha_R)}{\cos(\alpha_R)}
and
f y R = r o s r s f o s R .
Likewise, for the left-sided α L = β L − θ 1 L ,
f_{x_L} = \frac{r_{os}}{r_s} f_{os_R} \frac{\sin(\alpha_L)}{\cos(\alpha_L)}
as well as
f y L = r o s r s f o s L .
In this scenario, an inverse solution is only applicable for f x R , L , with no necessity for determining f y R , L . Consequently, the mechanical advantage transferred between the input and output bars can be expressed by a simplified coefficient
\kappa \equiv \frac{r_{os}}{r_s}.
Furthermore, the trigonometric identity
\frac{\sin(\beta - \theta_1)}{\cos(\beta - \theta_1)} \equiv \tan(\beta - \theta_1)
can be substituted to streamline the ensuing system of nonlinear equations, which is solved simultaneously. Additionally, let θ 1 be defined as:
\theta_{1_{R,L}} = \frac{\pi}{2} \pm \frac{r_s f_{os_{R,L}} t^2}{2 m r_{os}^2}.
Hence, the simultaneous nonlinear system is explicitly presented solely for the force components along the X axis:
f_{x_R} = \kappa f_{os_L} \tan(\beta_R - \theta_{1_R})
and
f_{x_L} = \kappa f_{os_R} \tan(\beta_L - \theta_{1_L}).
Therefore, for the numerical solution of the system, a multidimensional Newton–Raphson approach is employed as outlined in the provided solution:
β_R^{t+1} = β_R^t − [ f_{xR} (∂f_{xL}/∂β_L) − f_{xL} (∂f_{xR}/∂β_L) ] / [ (∂f_{xR}/∂β_R)(∂f_{xL}/∂β_L) − (∂f_{xR}/∂β_L)(∂f_{xL}/∂β_R) ]
and
β_L^{t+1} = β_L^t − [ f_{xL} (∂f_{xR}/∂β_R) − f_{xR} (∂f_{xL}/∂β_R) ] / [ (∂f_{xR}/∂β_R)(∂f_{xL}/∂β_L) − (∂f_{xR}/∂β_L)(∂f_{xL}/∂β_R) ].
Thus, by defining all derivative terms to finalize the system,
∂f_{xR}/∂β_R = −κ f_{osR} θ_{1R} / cos²(β_R − θ_{1R}),
∂f_{xR}/∂β_L = 0,
∂f_{xL}/∂β_R = 0,
∂f_{xL}/∂β_L = −κ f_{osL} θ_{1L} / cos²(β_L − θ_{1L}).
Therefore, by subsequently organizing and algebraically simplifying,
β_R^{t+1} = β_R^t − f_{xR} (∂f_{xL}/∂β_L) / [ (∂f_{xR}/∂β_R)(∂f_{xL}/∂β_L) ] = β_R^t + tan(β_R − θ_{1R}) cos²(β_R − θ_{1R}) / θ_{1R}
and
β_L^{t+1} = β_L^t − f_{xL} (∂f_{xR}/∂β_R) / [ (∂f_{xR}/∂β_R)(∂f_{xL}/∂β_L) ] = β_L^t + tan(β_L − θ_{1L}) cos²(β_L − θ_{1L}) / θ_{1L}.
The objective is numerical convergence, such that β^{t+1} ≈ β^t. Through this inverse solution, the lateral force components at the first spinal link, denoted f_x, are estimated because they are perpendicular to the links and produce the angular moments at each passive joint:
f_x = (f_{xR}, f_{xL})^T = κ · ( f_{osL} tan( β_R^{t+1} − π/2 − r_s f_{osL} t²/(2 m r_{os}²) ), f_{osR} tan( β_L^{t+1} − π/2 + r_s f_{osR} t²/(2 m r_{os}²) ) )^T.
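As a numerical illustration of this inverse solution, the decoupled iteration can be sketched as follows. Because the cross-derivatives vanish, each equation reduces to a scalar Newton step; this sketch uses the textbook update β ← β − f_x/(∂f_x/∂β) with the analytic derivative κ f_{os}/cos²(β − θ_1), under which κ and f_{os} cancel and the step becomes sin(β − θ_1)cos(β − θ_1). All numeric values (κ, f_{os}, θ_1, initial β) are illustrative assumptions, not prototype parameters.

```python
import math

def solve_beta(theta1, beta0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson root of f_x(beta) = kappa*f_os*tan(beta - theta1).

    The cross-derivatives are zero, so the right/left equations decouple.
    With the analytic derivative kappa*f_os/cos^2(beta - theta1), the Newton
    step f_x/f_x' simplifies to tan(u)*cos^2(u) = sin(u)*cos(u), u = beta - theta1.
    """
    beta = beta0
    for _ in range(max_iter):
        u = beta - theta1
        step = math.sin(u) * math.cos(u)
        beta -= step
        if abs(step) < tol:       # numerical proximity: beta^{t+1} ~ beta^t
            break
    return beta

# Hypothetical operating point (assumed values).
kappa, f_os = 0.5, 2.0                   # mechanical advantage, coil force [N]
theta1_R = math.pi / 2 + 0.1             # from the pi/2 +/- ... expression
theta1_L = math.pi / 2 - 0.1
beta_R = solve_beta(theta1_R, beta0=theta1_R + 0.4)
beta_L = solve_beta(theta1_L, beta0=theta1_L - 0.4)

# Lateral force components at the converged angles.
f_xR = kappa * f_os * math.tan(beta_R - theta1_R)
f_xL = kappa * f_os * math.tan(beta_L - theta1_L)
```

At the converged root, tan(β − θ_1) = 0, so the residual lateral forces vanish by construction.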
Thus, given that the torque of the trapezoid’s second bar is τ_s = f_s r_s (see Figure 12b), we establish a torque–angular moment equivalence, denoted τ_s ≡ M_1. Leveraging this equivalence and the prior knowledge of the torque τ_s acting on the second bar of the trapezoid, mechanically connected to the first link ℓ_1, we affirm their shared angular moment. Consequently, the general expression for the tangential force f_k applied at the end of each link ℓ_k is:
f_k = M_k / ℓ_k.
Yet, considering the angular moment M_k for each helical-spring joint, supporting the mass of the successive links, let us introduce equivalent inertial moments, starting with I_{ε1} = I_1 + I_2 + I_3 + I_4. Subsequently, we define I_{ε2} = I_2 + I_3 + I_4, I_{ε3} = I_3 + I_4, and finally, I_{ε4} = I_4. Thus, in the continuum of the caudal spine, the transmission of energy to each link is contingent upon the preceding joints, as established by
M_1 = I_{ε1} θ̈_1,  M_2 = I_{ε2} θ̈_2,  M_3 = I_{ε3} θ̈_3,  M_4 = I_4 θ̈_4.
Each helical spring, connecting pairs of vertebrae, undergoes a restoring force f = −kx, directly proportional to the spring deformation indicated by the elongation x. Here, k [kg m²/s²] represents the stiffness coefficient. External forces result in an angular moment, given by τ = −kθ, where torque serves as the variable equivalent to the angular moment, such that Iα = −kθ. Consequently, when expressed as a linear second-order differential equation, we have:
θ̈ + (k/I) θ = θ̈_{Lk}.
Here, θ̈_{Lk} represents undulatory accelerations arising from external loads or residual motions along the successive caudal links, which are detectable through encoders and IMUs. Assuming an angular frequency ω² = k/I, a period p = 2π√(I/k), and moments of inertia expressed as I_k = r_k² m_k, the general equation is formulated as follows:
M_k = I_{εk} θ̈_k,
where θ̈_k is replaced by the helical-spring expression (71) to derive
M_k = I_{εk} ( θ̈_{Lk} − (k_k/I_{εk}) θ_k ).
By algebraically extending, omitting terms, and rearranging for all links in the caudal spine, we arrive at the following matrix-form equation:
(M_1, M_2, M_3, M_4)^T = diag(I_{ε1}, I_{ε2}, I_{ε3}, I_{ε4}) · (θ̈_{L1}, θ̈_{L2}, θ̈_{L3}, θ̈_{L4})^T − (k_1 θ_1, k_2 θ_2, k_3 θ_3, k_4 θ_4)^T.
Hence, in accordance with Expression (69), the tangential forces exerted on all the caudal links of the robotic fish are delineated by the following expression:
(f_{ℓ1}, f_{ℓ2}, f_{ℓ3}, f_{ℓ4})^T = diag(I_{ε1}/ℓ_1, I_{ε2}/ℓ_2, I_{ε3}/ℓ_3, I_{ε4}/ℓ_4) · (θ̈_{L1}, θ̈_{L2}, θ̈_{L3}, θ̈_{L4})^T − diag(k_1/ℓ_1, k_2/ℓ_2, k_3/ℓ_3, k_4/ℓ_4) · (θ_1, θ_2, θ_3, θ_4)^T.
Alternatively, the last expression can be denoted as the following control law:
f = M θ̈_L − Q θ_t ≈ M (θ̇_{t2} − θ̇_{t1}) / (t_2 − t_1) − Q θ_t,
where f = (f_{ℓ1}, f_{ℓ2}, f_{ℓ3}, f_{ℓ4})^T, M represents the mass-dispersion matrix, and θ̈_L = (θ̈_{L1}, θ̈_{L2}, θ̈_{L3}, θ̈_{L4})^T denotes the vector of angular accelerations of the caudal vertebrae, including external loads. Additionally, Q stands for the matrix of stiffness coefficients. Therefore, the inverse dynamics control law, presented in recursive form, is:
θ̇_L^{t+1} = θ̇_L^t + M^{−1} (t_2 − t_1) ( f + Q θ_t ).
Finally, for feedback control, both equations are employed simultaneously within a recursive computational scheme, with angular observations derived at each iteration from the sensors at the joints: encoders and IMUs.
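The matrix-form forces and the recursive control law can be sketched together in a few lines of NumPy. The inertias, stiffnesses, and link lengths below are illustrative assumptions; applying the two equations back to back from rest recovers θ̇ = Δt · θ̈_L, a quick internal-consistency check of the pair.

```python
import numpy as np

# Assumed four-link caudal-spine parameters (illustrative values only).
I = np.array([4e-4, 3e-4, 2e-4, 1e-4])    # link inertias I_1..I_4 [kg m^2]
I_eps = np.cumsum(I[::-1])[::-1]          # equivalent inertias: I_eps_k = I_k + ... + I_4
k = np.array([0.8, 0.6, 0.4, 0.2])        # helical-spring stiffness coefficients
ell = np.array([0.05, 0.04, 0.03, 0.02])  # link lengths l_1..l_4 [m]

M = np.diag(I_eps / ell)                  # mass-dispersion matrix
Q = np.diag(k / ell)                      # stiffness-coefficient matrix

def tangential_forces(theta_ddot_L, theta):
    """f = M @ theta_ddot_L - Q @ theta (tangential forces on the links)."""
    return M @ theta_ddot_L - Q @ theta

def velocity_update(theta_dot, f, theta, dt):
    """Recursive law: theta_dot(t+1) = theta_dot(t) + M^{-1} dt (f + Q theta)."""
    return theta_dot + np.linalg.solve(M, dt * (f + Q @ theta))

theta = np.array([0.10, 0.05, 0.02, 0.01])      # joint angles from encoders/IMUs [rad]
theta_ddot_L = np.array([2.0, 1.5, 1.0, 0.5])   # accelerations incl. external loads
f = tangential_forces(theta_ddot_L, theta)
theta_dot_next = velocity_update(np.zeros(4), f, theta, dt=0.01)
```

Substituting f back into the update cancels the Qθ terms, so `theta_dot_next` equals `dt * theta_ddot_L` exactly, confirming that the two expressions are mutually consistent.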

7. Ballasting Control System

This section delineates the integration of the ballasting control system, crafted to complement the primary structure of the biorobot. It introduces the ballasting model-based control system, selectively activated in response to the artificial neural network’s (ANN) output, particularly triggered when the ANN signals “sink” or “buoyancy”. Figure 13a visually depicts the biorobot’s ballasting system, while Figure 13b presents a diagram illustrating the fundamental components of the hydraulic piston, crucial for control modeling.
The core operational functions of the ballasting device involve either filling its container chamber with water to achieve submergence or expelling water from the container to attain buoyancy. Both actions entail applying a linear force to a plunger, or hydraulic cylindrical piston, thereby controlling water flow through either suction or expulsion. Consequently, the volume of the liquid mass fluctuates over time, contingent upon a control reference or desired level, denoted H, together with a quantified filling rate u(t) and the measured actual liquid level h(t).
Hence, we can characterize the filling rate u ( t ) as the change in volume V with respect to time, expressed as
dV(t)/dt = u(t),
and assuming a cylindrical plunger chamber with radius r and area A = π r 2 , the volume is expressed as
V ( t ) = A h ( t ) .
Here, h ( t ) represents the actual position of the plunger due to the incoming hydraulic mass volume. Consequently, the filling rate can also be expressed as
u(t) = k ( H − h(t) ).
Consider k as an adjustment coefficient, and let H be the reference or desired filling level. The instantaneous longitudinal filling level is denoted h ( t ) . By substituting the previous expressions into the initial Equation (78), we derive the following first-order linear differential equation:
A dh(t)/dt = k ( H − h(t) ).
To solve the aforementioned equation, we employ the integrating factor method, such that
ḣ(t) + (k/A) h(t) = (k/A) H.
In this instance, the integrating factor is determined as follows:
μ(t) = e^{∫ (k/A) dt} = e^{kt/A}.
Thus, multiplying through by the integrating factor prepares the left side to be collected into an exact derivative:
e^{kt/A} ḣ(t) + e^{kt/A} (k/A) h(t) = (k/A) H e^{kt/A}.
The left side of the aforementioned expression is recognized as the derivative of a product, such that:
d/dt ( h(t) e^{kt/A} ) = (k/A) H e^{kt/A}.
Following this, by integrating both sides of the equation with respect to time,
∫ d/dt ( h(t) e^{kt/A} ) dt = ∫ (k/A) H e^{kt/A} dt,
where the expression on the left side undergoes a transformation into
h(t) e^{kt/A} = (k/A) H ∫ e^{kt/A} dt,
and the right side of the equation, once solved, transforms into
h(t) e^{kt/A} = H e^{kt/A} + c.
Now, to obtain the solution for h(t), it is isolated by rearranging the term e^{kt/A}:
h(t) = H + c e^{−kt/A}.
For initial conditions where h ( t 0 ) = 0 indicates the plunger is completely inside the contained chamber at the initial time t 0 = 0 s, the integration constant c is determined as
0 = H + c e 0 .
Therefore, c = −H, and substituting it into the previously obtained solution yields
h(t) = H ( 1 − e^{−kt/A} ).
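The closed-form fill level can be cross-checked by integrating the original first-order equation A·dh/dt = k(H − h) with a forward-Euler scheme; the chamber radius, coefficient k, and reference level H below are assumed, illustrative values.

```python
import math

# Illustrative ballast parameters (assumed values).
r = 0.02                   # plunger chamber radius [m]
A = math.pi * r ** 2       # chamber cross-section area [m^2]
k = 5e-4                   # filling-rate adjustment coefficient
H = 0.10                   # reference (desired) filling level [m]

def h_closed(t):
    """Closed-form solution h(t) = H (1 - e^{-kt/A})."""
    return H * (1.0 - math.exp(-k * t / A))

# Forward-Euler integration of A dh/dt = k (H - h), starting from h(0) = 0.
dt, t_end = 1e-3, 5.0
h, t = 0.0, 0.0
while t < t_end:
    h += dt * (k / A) * (H - h)
    t += dt
```

With a time constant A/k of roughly 2.5 s for these values, both the integrated and closed-form curves approach the reference level H asymptotically, as expected from the first-order dynamics.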
In addition, the required force of the piston f_e is given by:
f_e = m(t) dv/dt + f_k + ρ_a A,
where f_k is the friction force of the plunger within the cylinder, and ρ_a A refers to the water pressure at that depth acting over the piston’s entry area. The instantaneous mass comprises the piston’s mass m_e and the liquid mass of the incoming water m_a:
m ( t ) = m e + m a ,
where the water density is δ_a = m_a / V_a and V_a = π r² h(t), thus completing the mass model:
m(t) = m_e + δ_a π r² H ( 1 − e^{−kt/A} ).
Therefore, the force required to pull/push the plunger device is stated by the control law, given as
f_e = [ m_e + δ_a π r² H ( 1 − e^{−kt/A} ) ] dv/dt + f_k + ρ_a A.
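A sketch of the piston force control law with its time-varying mass m(t). All physical parameters (piston mass, friction force, depth pressure) are assumed placeholders; the point of the sketch is that the required force grows as the chamber fills, from m(0) = m_e toward m_e + δ_a π r² H.

```python
import math

# Assumed ballast-piston parameters (illustrative values only).
m_e = 0.15                # piston mass [kg]
delta_a = 1000.0          # water density [kg/m^3]
r = 0.02                  # chamber radius [m]
A = math.pi * r ** 2      # piston entry area [m^2]
k = 5e-4                  # filling coefficient
H = 0.10                  # reference filling level [m]
f_k = 0.3                 # piston friction force [N]
rho_a = 2.0e4             # water pressure at the current depth [Pa]

def piston_force(t, dv_dt):
    """Control law f_e = m(t) dv/dt + f_k + rho_a A, with time-varying mass."""
    m_t = m_e + delta_a * math.pi * r ** 2 * H * (1.0 - math.exp(-k * t / A))
    return m_t * dv_dt + f_k + rho_a * A

f_empty = piston_force(0.0, dv_dt=0.05)    # chamber empty: m(0) = m_e
f_full = piston_force(1e6, dv_dt=0.05)     # chamber full: m -> m_e + delta_a*pi*r^2*H
```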
Finally, two sensory aspects were considered for feedback in the ballast system control and depth estimation. Firstly, the ballast piston is displaced by a motor-driven worm screw with an advance parameter L_p [m]; the motor rotation ϕ_p is measured by an encoder of angular resolution R delivering η̂_t pulses, where ϕ_p = 2π η̂_t / R. This allows the instantaneous position of the piston to be measured by the observation ĥ_t:
ĥ_t = L_p ϕ_p = (2π L_p / R) η̂_t.
Secondly, the robot’s aquatic depth estimation, denoted as y t , is acquired using a pressure sensor as previously described in Figure 2. This involves measuring the variation in pressure according to Pascal’s principle between two consecutive measurements taken at different times.
f_t − f_{t−1} − m_f g = 0,
where the instantaneous fluid force at a certain depth is expressed in terms of the measured pressure ρ̂, acting on the averaged area A_f of the fish body of mass m_f:
f_t = ρ̂ A_f.
Assuming an ideal fluid, substituting the fluid forces in terms of measurable pressures, and expressing the weight term through the water density δ_a and the body volume V_f, it follows that
ρ_2 A_f − ρ_1 A_f − δ_a V_f g = 0,
where y_1 and y_2 represent the surface depth and the actual depth of the robot, respectively, such that y = y_2 − y_1 denotes the total depth. Therefore,
A_f ( ρ_2 − ρ_1 − δ_a y g ) = 0,
and by determining the pressure in the liquid as a function of depth, we select the level of the liquid’s free surface such that ρ_1 ≜ ρ_A, where ρ_A represents the atmospheric pressure, resulting in
ρ̂ − ρ_A − δ_a y g = 0,
establishing ρ̂ ≜ ρ_2 as the water pressure sensor measurement. Thus, the feedback estimate of the robot fish depth is given by
y(ρ̂) = (ρ̂ − ρ_A) / (δ_a g).
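The two feedback observations can be sketched as small helper functions. The screw advance L_p (taken here as metres per radian), encoder resolution R, and pressure readings are illustrative assumptions.

```python
import math

# Assumed sensory parameters (illustrative values).
L_p = 0.002        # worm-screw advance per radian of motor rotation [m]
R = 512            # encoder resolution [pulses per revolution]
delta_a = 1000.0   # water density [kg/m^3]
rho_A = 101325.0   # atmospheric pressure at the free surface [Pa]
g = 9.81           # gravitational acceleration [m/s^2]

def piston_position(eta_pulses):
    """h_t = L_p phi_p = (2 pi L_p / R) eta, with phi_p = 2 pi eta / R."""
    phi_p = 2.0 * math.pi * eta_pulses / R
    return L_p * phi_p

def depth_from_pressure(rho_measured):
    """Pascal's principle: y = (rho_hat - rho_A) / (delta_a g)."""
    return (rho_measured - rho_A) / (delta_a * g)

h_hat = piston_position(1024)          # 1024 pulses = two motor revolutions
y = depth_from_pressure(121000.0)      # ~121 kPa reading -> roughly 2 m depth
```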

8. Conclusions and Future Work

In summary, this study introduces a cybernetic control approach integrating electromyography, haptic feedback, and an underactuated biorobotic fish avatar. Human operators control the fish avatar using their muscular stimuli, eliminating the need for a handheld apparatus. The incorporation of fuzzy control, combining EMG stimuli with motion sensor observations, proved highly versatile in shaping the decision-making process that governs the fish’s swimming behavior. The implemented deep neural network achieved an accuracy surpassing 99.02% in recognizing sixteen distinct electromyographic gestures. This underscores the system’s robustness, effectively translating human intentions into precise control commands for the underactuated robotic fish.
The adoption of robotic fish technologies as human avatars in deep-sea exploration offers significant benefits and implications. Robotic fish avatars enable more extensive and efficient exploration of hazardous or inaccessible deep-sea environments, leading to groundbreaking discoveries in marine biology, geology, and other fields. By replacing human divers, robotic avatars mitigate risks associated with deep-sea exploration, ensuring the safety of researchers by avoiding decompression sickness, extreme pressure, and physical danger. Robotic avatar technology presents a more cost-effective alternative to manned missions, eliminating the need for specialized life support systems, extensive training, and logistical support for human divers. Robotic avatars facilitate continuous monitoring of deep-sea ecosystems, collecting data over extended periods without being limited by human endurance or logistical constraints. Minimizing human presence in the deep sea reduces environmental disturbance and the risk of contamination or damage to fragile ecosystems, preserving them for future study and conservation efforts. The development of robotic fish technologies drives innovation in robotics, artificial intelligence, and sensor technologies, with potential applications extending beyond marine science into various industries. Overall, while robotic fish avatars offer numerous benefits for deep-sea exploration and research, their deployment should be carefully managed to maximize scientific advancement while minimizing potential negative consequences.
While the adoption of robotic fish avatars holds significant promise for deep-sea exploration, several limitations and areas for future research must be addressed. Developing robotic fish avatars capable of accurately mimicking the behaviors of real fish in diverse deep-sea conditions remains a considerable challenge, necessitating improvements in propulsion, maneuverability, and energy efficiency. Enhancing their sensory capabilities to detect environmental stimuli effectively, alongside improving communication systems for real-time data transmission, is crucial for efficient exploration and navigation. Additionally, these avatars must adapt to the harsh deep-sea conditions, requiring research into materials and components that withstand extreme pressures, low temperatures, and limited visibility while maintaining functionality. Ensuring their long-term reliability and durability through maintenance strategies and robust designs is essential for sustained exploration missions. Integrating robotic fish avatars with emerging technologies like artificial intelligence and advanced sensors could further enhance their effectiveness. Addressing these challenges through continued research is vital for realizing the full potential of robotic fish avatars in deep-sea exploration.
This manuscript presents the outcomes of experimental EMG data classification and recognition achieved through a multilayered artificial neural network. These results signify a significant advancement in the field of robotic control, as they demonstrate the successful classification and recognition of EMG data, paving the way for enhanced control strategies in robotics. Utilizing an oscillation pattern generator, real signals were supplied to an experimental prototype of an underactuated robotic avatar fish equipped with an electromagnetic oscillator. This successful integration showcases the feasibility of incorporating real-time EMG data into robotic control systems, enabling more dynamic and responsive behavior. Additionally, validation of the fuzzy controller and the fish’s dynamical control model was conducted via computer simulations, providing further evidence of the effectiveness and reliability of the proposed control architecture. While the introduction of haptic feedback and interface remains conceptual within the proposed architecture, it signifies a promising direction for future research, aiming to augment remote operation with immersive experiences. The advancements demonstrated in this study hold substantial potential for future applications in underwater exploration, offering immersive cybernetic control capabilities that could revolutionize the field. The future trajectory of this research endeavors to enhance telepresence and avatar functionalities by integrating electroencephalography signals from the human brain. This integration will harness more sophisticated deep learning artificial neural network structures to achieve superior signal recognition. This advancement promises to unlock new levels of immersive interaction and control, paving the way for transformative applications in fields such as virtual reality, human-robot interaction, and neurotechnology.

Author Contributions

Conceptualization, E.A.M.-G.; project administration, E.A.M.-G.; supervision, E.A.M.-G. and R.T.-C.; writing original draft preparation, E.A.M.-G.; data curation, M.A.M.M.; investigation, M.A.M.M. and E.A.M.-G.; methodology, M.A.M.M., R.T.-C. and E.A.M.-G.; software, M.A.M.M.; formal analysis, M.A.M.M. and E.A.M.-G.; writing—review and editing, E.A.M.-G. and E.M.; validation, E.A.M.-G. and E.M.; visualization, M.A.M.M., E.A.M.-G. and E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received partial funding through scholarship grant number 2022-000018-02NACF, awarded to CVU 1237846 by the Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCYT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset for this research, along with a supporting experimental video, has been made accessible on Zenodo. The files were uploaded on 12 January 2024 at https://doi.org/10.5281/zenodo.10477143.

Acknowledgments

The corresponding author acknowledges the support of Laboratorio de Robótica. The third author acknowledges the support of the Kazan Federal University Strategic Academic Leadership Program (“PRIORITY-2030”).

Conflicts of Interest

The corresponding author served as a guest editor for this Special Issue of the journal and had no influence on the blind peer-review process or the final decision on the manuscript. The authors declare that there are no further conflicts of interest.

Appendix A. EMG Stimuli Patterns

Figure A1. Components of thumb pattern space: filters γ , λ , and Ω . (a) Filters applied to the left thumb. (b) Filters applied to the right thumb.
Figure A2. Components of index finger pattern space: filters γ , λ , and Ω . (a) Filters applied to the left index finger. (b) Filters applied to the right index finger.
Figure A3. Components of middle finger pattern space: filters γ , λ , and Ω . (a) Filters applied to the left middle finger. (b) Filters applied to the right middle finger.
Figure A4. Components of ring finger pattern space: filters γ , λ , and Ω . (a) Filters applied to the left ring finger. (b) Filters applied to the right ring finger.
Figure A5. Components of little finger pattern space: filters γ , λ , and Ω . (a) Filters applied to the left little finger. (b) Filters applied to the right little finger.

Figure 1. Cybernetic robotic avatar system architecture. Signal electrodes (green circles) and ground electrodes (red circles) are experimentally positioned on the Flexor Digitorum Superficialis, Flexor Digitorum Profundus, and Flexor Carpi muscles.
Figure 2. Robot’s underactuated mechanisms and sensory system onboard.
Figure 3. Experimental raw EMG data (from left to right): (a) Left and right hand, left and right thumb. (b) Left and right index, left and right middle. (c) Left and right ring, left and right little.
Figure 4. Notch-filtered EMG showing one period. (a) Right hand. (b) Right index. (c) Right middle.
Figure 5. Components of hand pattern space: filters γ , λ , and Ω . (a) Filters applied to the left hand. (b) Filters applied to the right hand.
Figure 6. EMG stimuli pattern space ( γ , λ , Ω ). (a) Left hand classes. (b) Right hand classes.
Figure 7. Multi-layered ANN for EMG pattern recognition.
Figure 8. Sequence of mixed EMG stimuli over time and ANN’s decimal output with 100% classification success. (a) Right limb. (b) Left limb.
Figure 9. Swimming behavior fuzzy sets. (a) Crisp input (ANN’s output). (b) Input of robot’s thrust velocity observation. (c) Input of robot’s angular speed observation.
Figure 10. Three identical output fuzzy sets depicting the oscillation period of the caudal tail’s undulation.
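The output stage summarized in Figures 9 and 10 can be illustrated with a Mamdani-style defuzzification over the tail oscillation period. This is a hedged sketch: the triangular set shapes, the period range, and the rule firing strengths are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

period = np.linspace(0.2, 2.0, 500)     # candidate oscillation periods (s)
sets = {"short": (0.2, 0.5, 1.0),       # assumed output fuzzy sets
        "medium": (0.5, 1.1, 1.7),
        "long": (1.2, 1.7, 2.0)}

# Example rule firing strengths; in the OPG these would come from the
# ANN's crisp output and the observed thrust/angular speeds (Figure 9).
firing = {"short": 0.2, "medium": 0.7, "long": 0.1}

# Clip each set at its firing strength, aggregate by max, defuzzify by centroid.
agg = np.zeros_like(period)
for name, (a, b, c) in sets.items():
    agg = np.maximum(agg, np.minimum(firing[name], tri(period, a, b, c)))
T_out = float((agg * period).sum() / agg.sum())   # crisp tail period (s)
```

With the dominant "medium" rule, the centroid lands near that set's peak, smoothly biased by the weaker "short" and "long" rules.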
Figure 11. Oscillation functions for the caudal tail (measured in radians) synchronized with dual Boolean coil step patterns (dimensionless), presented on a consistent time scale: (a) Forward undulations. (b) Right-turn undulations. (c) Left-turn undulations.
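The decoupling in Figure 11, where one oscillation function yields coordinated right and left coil step patterns, can be sketched by thresholding the tail angle. The amplitude, period, and the assumption that each solenoid fires on one half-cycle are illustrative, not taken from the paper.

```python
import numpy as np

A, T = 0.4, 1.0                           # assumed tail amplitude (rad) and period (s)
t = np.linspace(0.0, 2 * T, 400)
theta = A * np.sin(2 * np.pi * t / T)     # caudal tail oscillation function (rad)

# Decouple the oscillation into dual Boolean coil steps: each solenoid
# is energized on its own half-cycle, never both at once.
right_coil = (theta > 0).astype(int)      # step signal pulling the tail right
left_coil = (theta < 0).astype(int)       # step signal pulling the tail left
```

Turning maneuvers could then be produced by biasing the duty cycle of one coil relative to the other, consistent with the asymmetric right- and left-turn undulations shown in panels (b) and (c).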
Figure 12. Model of the robot fish mechanism, illustrating: (a) Top view of the musculoskeletal system. (b) Top view of the robot’s head with antagonistic muscle-based electromagnetic oscillator. (c) Side view of the ballast system device positioned beneath the robot’s head.
Figure 13. Ballasting system of the robot fish. (a) Detailed 3D model of the robot fish with the ballast device positioned beneath its floor. (b) Components of the basic ballasting device designed for modeling and control purposes.
Table 1. Pertinent related work: comprehensive comparison.
| Research Topic | References | Distinctive Aspect of This Study |
|---|---|---|
| Remote mobile robots; HRI teleoperation | [13,28], [16,17] | Swimming response from biological EMG stimuli. |
| Teleoperation and telepresence HRI reviews, techniques, and applications | [12,14], [15,29] | Haptic perception robot to human; cybernetic control human to robot. |
| Telepresence by avatar and immersion systems | [18,30,31], [20] | Haptic and 2D visual data avatar and neuromuscular control response. |
| Central pattern generator (CPG); neural and locomotion studies | [23,24], [25,32] | Neuro-fuzzy caudal swim undulation pattern generator. |
| Human–robot collaboration haptics and teleoperation | [19,21,22], [33] | Reactive swimming by remote human stimuli and haptic robot feedback. |
| Cybernetic control and bionic systems | [9,10,11], [26,27] | Underactuated biomechanical model and propulsive electromagnetic oscillator. |
Table 2. Comparative results of neural network architectures for EMG classification.
| Measures | Feedforward Multilayer | Convolutional | Competing Self-Organizing Map |
|---|---|---|---|
| Training epochs | 100 | 100 | 100 |
| Classification rate | 69.85% | 24.17% | 23.88% |
| Training time rate ¹ | 1 | 0.9046 | 0.9861 |
| Number of hidden layers | 5 | 5 | 1 |
| Neurons per hidden layer | 20 | 20 | 32 |
| Neurons per output layer | 4 | 4 | 4 |
| Total neurons | 104 | 104 | 32 |

¹ The training dataset comprised 103,203 samples. The feedforward network’s training time was set to 1 as the comparative reference.
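The best-performing architecture in Table 2, a feedforward multilayer perceptron with five hidden layers of 20 neurons and a 4-neuron output layer (104 neurons in total), can be sketched as a forward pass. This is a hedged illustration only: the 3-feature input (the γ, λ, Ω statistics), the random weights, and the sigmoid activation are assumptions, not details confirmed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 EMG features -> 5 hidden layers of 20 -> 4-bit output (104 neurons total).
layer_sizes = [3, 20, 20, 20, 20, 20, 4]
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate a feature vector through every perceptron layer."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

y = forward([0.2, 0.5, 0.1])   # hypothetical (gamma, lambda, omega) features
bits = (y > 0.5).astype(int)   # threshold to the y3..y0 code of Table 3
```

A trained network would replace the random weights; thresholding the four sigmoid outputs yields the binary code that indexes the swimming behaviors in Table 3.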
Table 3. ANN results of mapping EMG to robotic avatar swimming behaviors.
| ANN’s EMG Inputs | y3 | y2 | y1 | y0 | Swimming Style ¹ |
|---|---|---|---|---|---|
| quiet | 0 | 0 | 0 | 0 | Sink |
| right hand | 0 | 0 | 0 | 1 | Buoyant |
| right thumb | 0 | 0 | 1 | 0 | Gliding |
| right index | 0 | 0 | 1 | 1 | Slow thrusting |
| right middle | 0 | 1 | 0 | 0 | Medium thrusting |
| right ring | 0 | 1 | 0 | 1 | Fast thrusting |
| right little | 0 | 1 | 1 | 0 | Slow right maneuvering |
| left hand | 0 | 1 | 1 | 1 | Medium right maneuvering |
| left thumb | 1 | 0 | 0 | 0 | Fast right maneuvering |
| left index | 1 | 0 | 0 | 1 | Slow left maneuvering |
| left middle | 1 | 0 | 1 | 0 | Medium left maneuvering |
| left ring | 1 | 0 | 1 | 1 | Fast left maneuvering |
| left little | 1 | 1 | 0 | 0 | Speed up right turn |
| both index | 1 | 1 | 0 | 1 | Speed up left turn |
| right thumb–little | 1 | 1 | 1 | 0 | Slow down right turn |
| left thumb–little | 1 | 1 | 1 | 1 | Slow down left turn |

¹ The variables y0, y1, y2, y3 are the ANN’s binary (combinatory) outputs; yC denotes their combined decimal value.
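The mapping in Table 3 amounts to packing the four ANN output bits into the decimal code yC and looking up the commanded behavior. A minimal sketch (the function name and dictionary layout are ours; the entries mirror Table 3 verbatim):

```python
# Swimming behaviors indexed by the decimal code y_C, copied from Table 3.
SWIM_STYLES = {
    0: "Sink", 1: "Buoyant", 2: "Gliding", 3: "Slow thrusting",
    4: "Medium thrusting", 5: "Fast thrusting",
    6: "Slow right maneuvering", 7: "Medium right maneuvering",
    8: "Fast right maneuvering", 9: "Slow left maneuvering",
    10: "Medium left maneuvering", 11: "Fast left maneuvering",
    12: "Speed up right turn", 13: "Speed up left turn",
    14: "Slow down right turn", 15: "Slow down left turn",
}

def decode(y3, y2, y1, y0):
    """Combine the four ANN output bits into y_C and look up the behavior."""
    y_c = (y3 << 3) | (y2 << 2) | (y1 << 1) | y0
    return y_c, SWIM_STYLES[y_c]
```

For example, the "right index" stimulus (0011) decodes to yC = 3 and commands slow thrusting.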
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Cite as: Montoya Martínez, M.A.; Torres-Córdoba, R.; Magid, E.; Martínez-García, E.A. Electromyography-Based Biomechanical Cybernetic Control of a Robotic Fish Avatar. Machines 2024, 12, 124. https://doi.org/10.3390/machines12020124

