Article

Adaptive Human-Robot Interactions for Multiple Unmanned Aerial Vehicles

1 School of Engineering, RMIT University, Bundoora, VIC 3083, Australia
2 Saab-NTU Joint Lab, Nanyang Technological University, Singapore 637460, Singapore
3 THALES Australia, WTC North Wharf, Melbourne, VIC 3000, Australia
4 Northrop Grumman Corporation, 1550 W. Nursery Rd, Linthicum Heights, MD 21090, USA
* Author to whom correspondence should be addressed.
Approved for Public Release; Distribution is Unlimited; #20-2471; Dated 4 January 2021.
Robotics 2021, 10(1), 12; https://doi.org/10.3390/robotics10010012
Submission received: 10 November 2020 / Revised: 16 December 2020 / Accepted: 18 December 2020 / Published: 7 January 2021
(This article belongs to the Section Aerospace Robotics and Autonomous Systems)

Abstract

Advances in unmanned aircraft systems (UAS) have paved the way for progressively higher levels of intelligence and autonomy, supporting new modes of operation, such as the one-to-many (OTM) concept, where a single human operator is responsible for monitoring and coordinating the tasks of multiple unmanned aerial vehicles (UAVs). This paper presents the development and evaluation of cognitive human-machine interfaces and interactions (CHMI2) supporting adaptive automation in OTM applications. A CHMI2 system comprises a network of neurophysiological sensors and machine-learning-based models for inferring user cognitive states, as well as an adaptation engine containing a set of transition logics for control/display functions and discrete autonomy levels. Models of the user's cognitive states are trained on past performance and neurophysiological data during an offline calibration phase, and subsequently used in the online adaptation phase for real-time inference of these cognitive states. To investigate adaptive automation in OTM applications, a scenario involving bushfire detection was developed, where a single human operator is responsible for tasking multiple UAV platforms to search for and localize bushfires over a wide area. We present the architecture and design of the UAS simulation environment that was developed, together with various human-machine interface (HMI) formats and functions, to evaluate the CHMI2 system's feasibility through human-in-the-loop (HITL) experiments. The CHMI2 module was subsequently integrated into the simulation environment, providing the sensing, inference, and adaptation capabilities needed to realise adaptive automation. HITL experiments were performed to verify the CHMI2 module's functionalities in the offline calibration and online adaptation phases. In particular, results from the online adaptation phase showed that the system was able to support real-time inference and human-machine interface and interaction (HMI2) adaptation. However, the accuracy of the inferred workload was variable across the different participants (with a root mean squared error (RMSE) ranging from 0.2 to 0.6), partly due to the reduced number of neurophysiological features available as real-time inputs and also due to limited training stages in the offline calibration phase. To improve the performance of the system, future work will investigate the use of alternative machine learning techniques, additional neurophysiological input features, and a more extensive training stage.
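The two-phase pipeline described in the abstract (offline calibration of a workload model, then online inference driving discrete autonomy-level transitions) can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the linear regressor, the feature set, the 0-3 autonomy-level scale, and the workload thresholds are all hypothetical choices made for the sketch.

```python
import numpy as np

class WorkloadModel:
    """Toy workload model: offline calibration fits least-squares weights
    mapping neurophysiological features to a workload score in [0, 1]."""

    def calibrate(self, features, workload):
        # features: (n_samples, n_features); workload: (n_samples,)
        X = np.column_stack([features, np.ones(len(features))])  # bias term
        self.w, *_ = np.linalg.lstsq(X, workload, rcond=None)

    def infer(self, sample):
        # Online phase: real-time inference on one feature vector.
        x = np.append(sample, 1.0)
        return float(np.clip(x @ self.w, 0.0, 1.0))

def adapt_autonomy(level, workload, hi=0.7, lo=0.3):
    """Transition logic: raise the autonomy level when inferred workload
    is high, lower it when low; the band between acts as hysteresis."""
    if workload > hi and level < 3:
        return level + 1
    if workload < lo and level > 0:
        return level - 1
    return level

# Offline calibration on synthetic data (stand-ins for e.g. EEG band
# powers or pupil diameter), then a short online adaptation loop.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = np.clip(X @ np.array([0.2, 0.1, 0.3, 0.1]) + 0.5, 0.0, 1.0)
model = WorkloadModel()
model.calibrate(X, y)

level = 1
for sample in X[:5]:
    level = adapt_autonomy(level, model.infer(sample))
```

The hysteresis band (here 0.3-0.7) is one common way to keep such transition logic from oscillating between autonomy levels when the inferred workload hovers near a single threshold.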
Keywords: aerial robotics; autonomous systems; avionics; unmanned aircraft system; unmanned aerial vehicle; remote pilot; human-in-the-loop; human-machine system; human-machine interface; human-machine interaction; human-robot interactions; mental workload; neurophysiology; neuroergonomics; neurophysiological response; cyber-physical systems

Share and Cite

MDPI and ACS Style

Lim, Y.; Pongsakornsathien, N.; Gardi, A.; Sabatini, R.; Kistan, T.; Ezer, N.; Bursch, D.J. Adaptive Human-Robot Interactions for Multiple Unmanned Aerial Vehicles. Robotics 2021, 10, 12. https://doi.org/10.3390/robotics10010012


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
