Effective Behavioural Dynamic Coupling through Echo State Networks
Abstract
1. Introduction
2. Material and Methods
2.1. Echo State Network
Echo State Network’s Dynamics Formalisation
2.2. Training Procedure
Sampling
2.3. Intrinsic Plasticity
2.4. Parametrisation of the System
3. Experimental Setup
3.1. Technical Details
3.2. Input Signal
3.3. Testing Cases
- We tested the validity of the proposed method on a publicly available dataset, the Cornell Activity Dataset (CAD-120) [40], in order to assess the quality of the work presented and to provide evidence of the generalisation capability and flexibility of the proposed method. Although CAD-120 is rather distant from the application field of the proposed method, it allows for the comparison of our method against a baseline, while also showing the applicability of the setup regardless of the type of input signal. The results of our method are reported and compared to alternative state-of-the-art computational methods.
- In order to investigate the system in more detail, an extra set of sequences was recorded by the experimenter using the Leap Motion as input device. This produced a new dataset on which the proposed system has been further tested, referred to in this work as Dataset Testing. In this way, we are able to test the system with input sequences more applicable to our specific interest of robot control, and also to highlight general characteristics of the system.
- A small number of users were asked to control a simulated robot, visible on the screen of a computer, using the proposed system and the Leap Motion as input device. We refer to this test within this work as User Testing. In this phase of the testing, the users were asked to perform gestures using the Leap Motion device in relation to behaviours of the simulated robot shown by the experimenter on the computer screen. It is important to mention that the users were not instructed on the kind of gestures they should use to control the robot, allowing them to manipulate the input device freely, according to their own preferences. This resulted in different gestures being used by the users for the same robot behaviour shown to them, which indicates the flexibility of the proposed system in personalising the control sequences and the associations between the user's gestures and the robot behaviours. Since the gestures performed differed from user to user, based on their preference, only the accuracy of the system is reported under this setup. Once the ESN was trained with the gestures performed by the users, they were asked to control the robot using their own provided gestures.
3.3.1. CAD-120 Testing
3.3.2. Dataset Testing
3.3.3. User Testing
4. Results
4.1. CAD-120 Testing
4.2. Dataset Testing
4.3. User Testing
5. Properties of the Echo-State Network and Human-Machine Interface
5.1. Variability in Pattern Length
5.2. Continuous Mapping from the Raw Data Input
5.3. Geometrical Properties of the Input
5.4. Recognition Before the End of the Sequence
6. Conclusions
Author Contributions
Conflicts of Interest
References
- Kadous, M.W.; Sheh, R.K.M.; Sammut, C. Effective user interface design for rescue robotics. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 250–257. [Google Scholar]
- Shneiderman, B. Designing the User Interface-Strategies for Effective Human-Computer Interaction; Pearson Education: Chennai, India, 1986. [Google Scholar]
- Melidis, C.; Iizuka, H.; Marocco, D. Intuitive control of mobile robots: An architecture for autonomous adaptive dynamic behaviour integration. Cognit. Process. 2018, 19, 245–264. [Google Scholar] [CrossRef]
- Yin, Y. Real-Time Continuous Gesture Recognition for Natural Multimodal Interaction. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2014; pp. 1–8. [Google Scholar]
- Bodiroža, S.; Stern, H.I.; Edan, Y. Dynamic gesture vocabulary design for intuitive human-robot dialog. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12, Boston, MA, USA, 5–8 March 2012; p. 111. [Google Scholar] [CrossRef]
- Mitra, S.; Acharya, T. Gesture recognition: A survey. IEEE Trans. Syst. Man Cybern. Part C 2007, 37, 311–324. [Google Scholar] [CrossRef]
- Neverova, N.; Wolf, C.; Taylor, G.W.; Nebout, F. Multi-scale deep learning for gesture detection and localization. In Workshop at the European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 474–490. [Google Scholar]
- Vishwakarma, D.K.; Grover, V. Hand gesture recognition in low-intensity environment using depth images. In Proceedings of the 2017 International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 7–8 December 2017; pp. 429–433. [Google Scholar]
- Liu, H.; Wang, L. Gesture recognition for human-robot collaboration: A review. Int. J. Ind. Ergon. 2018, 68, 355–367. [Google Scholar] [CrossRef]
- Liarokapis, M.V.; Artemiadis, P.K.; Katsiaris, P.T.; Kyriakopoulos, K.J.; Manolakos, E.S. Learning human reach-to-grasp strategies: Towards EMG-based control of robotic arm-hand systems. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 2287–2292. [Google Scholar]
- Bodiroža, S.; Doisy, G.; Hafner, V.V. Position-invariant, real-time gesture recognition based on dynamic time warping. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 87–88. [Google Scholar]
- Ren, Z.; Yuan, J.; Meng, J.; Zhang, Z. Robust Part-Based Hand Gesture Recognition Using Kinect Sensor. IEEE Trans. Multimedia 2013, 15, 1110–1120. [Google Scholar] [CrossRef] [Green Version]
- Xu, D.; Wu, X.; Chen, Y.L.; Xu, Y. Online dynamic gesture recognition for human robot interaction. J. Intell. Robot. Syst. 2015, 77, 583–596. [Google Scholar] [CrossRef]
- Xu, D.; Chen, Y.L.; Lin, C.; Kong, X.; Wu, X. Real-time dynamic gesture recognition system based on depth perception for robot navigation. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 689–694. [Google Scholar] [CrossRef]
- Molchanov, P.; Yang, X.; Gupta, S.; Kim, K.; Tyree, S.; Kautz, J. Online Detection and Classification of Dynamic Hand Gestures With Recurrent 3D Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4207–4215. [Google Scholar]
- Bailador, G.; Roggen, D.; Tröster, G.; Triviño, G. Real time gesture recognition using continuous time recurrent neural networks. In Proceedings of the ICST 2nd International Conference on Body Area Networks, Florence, Italy, 11–13 June 2007; p. 15. [Google Scholar]
- Maung, T.H.H. Real-time hand tracking and gesture recognition system using neural networks. World Acad. Sci. Eng. Technol. 2009, 50, 466–470. [Google Scholar]
- Tsironi, E.; Barros, P.; Wermter, S. Gesture Recognition with a Convolutional Long Short-Term Memory Recurrent Neural Network. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 27–29 April 2016; pp. 213–218. [Google Scholar]
- Jirak, D.; Barros, P.; Wermter, S. Dynamic Gesture Recognition Using Echo State Networks; Presses Universitaires de Louvain: Louvain-la-Neuve, Belgium, 2015; p. 475. [Google Scholar]
- Wu, D.; Pigou, L.; Kindermans, P.J.; Le, N.D.H.; Shao, L.; Dambre, J.; Odobez, J.M. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1583–1597. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Čerňanský, M.; Tiňo, P. Comparison of echo state networks with simple recurrent networks and variable-length Markov models on symbolic sequences. In Proceedings of the Artificial Neural Networks—ICANN 2007, Porto, Portugal, 9–13 September 2007; pp. 618–627. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Lefebvre, G.; Berlemont, S.; Mamalet, F.; Garcia, C. Inertial gesture recognition with blstm-rnn. In Artificial Neural Networks; Springer: Cham, Switzerland, 2015; pp. 393–410. [Google Scholar]
- Hu, Y.; Wong, Y.; Wei, W.; Du, Y.; Kankanhalli, M.; Geng, W. A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition. PLoS ONE 2018, 13, e0206049. [Google Scholar] [CrossRef]
- Sheng, C.; Zhao, J.; Liu, Y.; Wang, W. Prediction for noisy nonlinear time series by echo state network based on dual estimation. Neurocomputing 2012, 82, 186–195. [Google Scholar] [CrossRef]
- Laje, R.; Buonomano, D.V. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 2013, 16, 925–933. [Google Scholar] [CrossRef]
- Sussillo, D.; Abbott, L.F. Generating coherent patterns of activity from chaotic neural networks. Neuron 2009, 63, 544–557. [Google Scholar] [CrossRef]
- Büsing, L.; Schrauwen, B.; Legenstein, R. Connectivity, dynamics, and memory in reservoir computing with binary and analog neurons. Neural Comput. 2010, 22, 1272–1311. [Google Scholar] [CrossRef]
- Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261. [Google Scholar] [CrossRef]
- Jaeger, H. The “echo state” Approach to Analysing and Training Recurrent Neural Networks-with an Erratum Note; Technical Report; German National Research Center for Information Technology (GMD): Bonn, Germany, 2001; Volume 148, p. 34. [Google Scholar]
- Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef]
- Lukoševičius, M. A Practical Guide to Applying Echo State Networks. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Jaeger, H. A Tutorial on Training Recurrent Neural Networks, Covering BPTT, RTRL, EKF and the ‘Echo State Network’ Approach; Fraunhofer Institute for Autonomous Intelligent Systems: Sankt Augustin, Germany, 2005; pp. 1–46. [Google Scholar]
- Manjunath, G.; Jaeger, H. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural Comput. 2013, 25, 671–696. [Google Scholar] [CrossRef] [PubMed]
- Reinhart, R.F.; Steil, J.J. Reaching movement generation with a recurrent neural network based on learning inverse kinematics for the humanoid robot iCub. In Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, Paris, France, 7–10 December 2009; pp. 323–330. [Google Scholar] [CrossRef]
- Triesch, J. A gradient rule for the plasticity of a neuron’s intrinsic excitability. In Artificial Neural Networks: Biological Inspirations; Springer: Berlin/Heidelberg, Germany, 2005; pp. 65–70. [Google Scholar]
- Steil, J.J. Online reservoir adaptation by intrinsic plasticity for backpropagation–decorrelation and echo state learning. Neural Netw. 2007, 20, 353–364. [Google Scholar] [CrossRef] [PubMed]
- Schrauwen, B.; Wardermann, M.; Verstraeten, D.; Steil, J.J.; Stroobandt, D. Improving reservoirs using intrinsic plasticity. Neurocomputing 2008, 71, 1159–1171. [Google Scholar] [CrossRef] [Green Version]
- Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv, 2016; arXiv:1605.02688. [Google Scholar]
- Koppula, H.S.; Gupta, R.; Saxena, A. Learning human activities and object affordances from rgb-d videos. Int. J. Robot. Res. 2013, 32, 951–970. [Google Scholar] [CrossRef]
- Mici, L.; Hinaut, X.; Wermter, S. Activity recognition with echo state networks using 3D body joints and objects category. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 27–29 April 2016; pp. 465–470. [Google Scholar]
- Gavrila, D.M. The visual analysis of human movement: A survey. Comput. Vis. Image Underst. 1999, 73, 82–98. [Google Scholar] [CrossRef]
- Kendon, A. Current issues in the study of gesture. Biol. Found. Gestures: Motor Semiot. Asp. 1986, 1, 23–47. [Google Scholar]
- Rybok, L.; Schauerte, B.; Al-Halah, Z.; Stiefelhagen, R. “Important stuff, everywhere!” Activity recognition with salient proto-objects as context. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014; pp. 646–651. [Google Scholar]
- Weber, C.; Masui, K.; Mayer, N.M.; Triesch, J.; Asada, M. Reservoir Computing for Sensory Prediction and Classification in Adaptive Agents. In Machine Learning Research Progress; Nova publishers: Hauppauge, NY, USA, 2008. [Google Scholar]
- Card, S.K.; Robertson, G.G.; Mackinlay, J.D. The information visualizer, an information workspace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 27 April–2 May 1991; pp. 181–186. [Google Scholar]
- Miller, R.B. Response time in man-computer conversational transactions. In Proceedings of the Fall Joint Computer Conference, Part I, San Francisco, CA, USA, 9–11 December 1968; pp. 267–277. [Google Scholar]
Description | ID | Sequence Length |
---|---|---|
Push | 0 | 150 |
Pull | 1 | 195 |
Swipe right | 2 | 144 |
Swipe left | 3 | 129 |
Clockwise Circle | 4 | 225 |
Anti-Clockwise Circle | 5 | 147 |
Up-Down | 6 | 147 |
Description | Training | Testing |
---|---|---|
Push | 0.99 | 0.99 |
Pull | 0.99 | 0.99 |
Swipe right | 0.99 | 0.98 |
Swipe left | 0.99 | 0.72 |
Clockwise Circle | 0.99 | 0.89 |
Anti-Clockwise Circle | 0.99 | 0.70 |
Up-Down | 0.96 | 0.88 |
Mean | 0.98 | 0.87 |
Participant | G1 | G2 | G3 | G4 |
---|---|---|---|---|
P1 | 2613 | 841 | 975 | 1142 |
P2 | 210 | 180 | 192 | 121 |
P3 | 721 | 619 | 360 | 701 |
P4 | 205 | 409 | 384 | 602 |
P5 | 187 | 155 | 68 | 101 |
P6 | 207 | 128 | 203 | 266 |
P7 | 604 | 614 | 436 | 596 |
P8 | 241 | 521 | 522 | 492 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Melidis, C.; Marocco, D. Effective Behavioural Dynamic Coupling through Echo State Networks. Appl. Sci. 2019, 9, 1300. https://doi.org/10.3390/app9071300