Figure 1.
Facial expression analysis process based on AI.
Figure 2.
Basic workflow of genetic algorithm.
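To make Figure 2 concrete, the following is a minimal Python sketch of the generational loop the figure depicts (initialization, fitness evaluation, selection, crossover, mutation). The fitness function, population size, and rates here are illustrative placeholders, not values from this study.

```python
import random

def run_ga(fitness, chrom_len, pop_size=50, generations=100,
           crossover_rate=0.8, mutation_rate=0.01):
    """Minimal generational GA; all parameters are illustrative."""
    # Initialization: random binary chromosomes.
    pop = [[random.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]

        # Tournament selection: the fitter of two random individuals wins.
        def select():
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            # Single-point crossover with probability crossover_rate.
            if random.random() < crossover_rate:
                cut = random.randrange(1, chrom_len)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # Bit-flip mutation (illustrated separately after Figure 4).
            for child in (c1, c2):
                for i in range(chrom_len):
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                next_pop.append(child)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Toy usage: maximize the number of ones in a 20-bit chromosome.
best = run_ga(fitness=sum, chrom_len=20)
```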
Figure 4.
Illustration of mutation.
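As a companion to Figure 4, here is the bit-flip mutation operator in isolation; the per-gene rate is again a placeholder.

```python
import random

def mutate(chromosome, rate=0.01):
    """Bit-flip mutation: each gene is independently inverted
    with probability `rate` (placeholder value)."""
    return [gene ^ 1 if random.random() < rate else gene
            for gene in chromosome]

print(mutate([0, 1, 1, 0, 1], rate=0.5))  # e.g., [0, 0, 1, 1, 1]
```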
Figure 5.
Control interfaces for remote operation of disaster-relief robots. (a) Pre-optimization control interface; (b) post-optimization control interface. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 6.
Experimental scenario for control interface evaluation. Note: The portrait in this image has been authorized by the individual.
Figure 7.
Experimental procedure for control interface evaluation. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 8.
Facial expression analysis for control interface evaluation. Note: The portrait in this image has been authorized by the individual.
Figure 9.
Mean emotional valence of pre- and post-optimized control interfaces.
Figure 10.
Eye-tracking heatmaps of pre- and post-optimized control interfaces. (a) Pre-optimization; (b) post-optimization. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 11.
Eye-tracking gaze plots of pre- and post-optimized control interfaces. (a) Pre-optimization; (b) post-optimization. Note: The text displayed on the control interface represents (1) the functional descriptions of the remote operation of disaster-relief robots, including: start, stop, path finding, obstacle avoidance, robotic arm grip, movement, robotic arm release, robotic arm positioning, robotic arm retraction and robotic arm control; (2) the rescue environment and system information presented on the control interface, including: temperature, humidity, battery level, risk factor, and video channels.
Figure 12.
Eye-tracking metrics of pre- and post-optimized control interfaces. (a) Fixation duration; (b) fixation count; (c) total scan path length; (d) average scan path length.
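The scan-path metrics in Figure 12c,d can be computed from an ordered fixation sequence. The sketch below uses the standard definitions (summed Euclidean distances between consecutive fixations; that total divided by viewing time for the average), which are assumed here since the captions do not restate the study's formulas.

```python
import math

def scan_path_metrics(fixations):
    """fixations: list of (x_px, y_px, duration_ms) in viewing order.
    Returns total scan path length (px) and average scan path length (px/s)."""
    total_len = sum(math.dist(fixations[i][:2], fixations[i + 1][:2])
                    for i in range(len(fixations) - 1))
    viewing_time_s = sum(f[2] for f in fixations) / 1000.0
    return total_len, total_len / viewing_time_s

# Toy example with three fixations.
print(scan_path_metrics([(100, 100, 300), (400, 100, 250), (400, 500, 400)]))
```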
Figure 13.
Mean scores of NASA-TLX cognitive load questionnaire of pre- and post-optimized control interfaces.
Figure 14.
Mean task completion time of pre- and post-optimized control interfaces.
Figure 15.
Experimental scenario for predicting robot appearance. Note: The portrait in this image has been authorized by the individual.
Figure 16.
Experimental procedure for predicting robot appearance.
Figure 17.
Heatmap, trajectory map, and AOI division of robot appearance. (a) Eye-tracking heatmap; (b) eye movement trajectories map; (c) robot AOI division, where AOI1 represents the robot’s head, AOI2 represents the robot’s torso, AOI3 represents the robot’s arms, AOI4 represents the robot’s hands, AOI5 represents the robot’s legs, AOI6 represents the robot’s feet, and AOI7 represents the robot as a whole.
Figure 18.
Fixation duration in different AOIs of the robot appearance.
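The per-AOI fixation durations behind Figure 18 follow from assigning each fixation to the AOI rectangles of Figure 17c and summing durations. A sketch with hypothetical bounding boxes (the study's actual AOI coordinates are not given in the captions); note that AOI7 covers the whole robot, so overlapping AOIs are intentional and a fixation may count toward more than one.

```python
# Hypothetical AOI bounding boxes (x_min, y_min, x_max, y_max) in pixels.
AOIS = {
    "AOI1 (head)": (400, 0, 600, 150),
    "AOI2 (torso)": (380, 150, 620, 450),
    "AOI7 (whole robot)": (300, 0, 700, 900),
    # ... remaining AOIs defined analogously
}

def aoi_fixation_durations(fixations, aois=AOIS):
    """Sum fixation duration (ms) per AOI; fixations: list of (x, y, duration_ms)."""
    totals = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur  # no break: overlapping AOIs both count
    return totals
```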
Figure 19.
Neural network structure.
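Figure 19 together with Table 8 implies a feed-forward network with two hidden layers; a PyTorch sketch using the optimal widths from Table 8 (353 and 1790). The input/output dimensions and ReLU activations are assumptions, as the captions do not specify them.

```python
import torch.nn as nn

class AppearancePredictor(nn.Module):
    """Two-hidden-layer MLP with Table 8's optimal widths (353, 1790).
    Input size (e.g., one feature per AOI) and ReLU are assumptions."""
    def __init__(self, n_inputs=7, n_outputs=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 353),
            nn.ReLU(),
            nn.Linear(353, 1790),
            nn.ReLU(),
            nn.Linear(1790, n_outputs),
        )

    def forward(self, x):
        return self.net(x)
```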
Figure 20.
Comparison of model predicted values with actual values.
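A comparison like Figure 20 can be reproduced by plotting predictions against ground truth with a y = x reference line; a generic matplotlib sketch (array contents hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_pred_vs_actual(y_true, y_pred):
    """Scatter of predicted vs. actual values with the perfect-prediction line."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    plt.scatter(y_true, y_pred, alpha=0.6)
    lo = min(y_true.min(), y_pred.min())
    hi = max(y_true.max(), y_pred.max())
    plt.plot([lo, hi], [lo, hi], "k--", label="perfect prediction")
    plt.xlabel("Actual value")
    plt.ylabel("Predicted value")
    plt.legend()
    plt.show()
```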
Table 1.
Corresponding codes of control keys.
Keys | Encoding | Keys | Encoding |
---|---|---|---|
Directional Button Group | 1 | Auto Obstacle Avoidance Enable | 11 |
Low Speed Mode | 2 | Auto Obstacle Avoidance Disable | 12 |
High Speed Mode | 3 | Auto Path Finding Enable | 13 |
Robotic Arm Positioning | 4 | Auto Path Finding Disable | 14 |
Robotic Arm Retraction | 5 | Movement Precision Increase | 15 |
Robotic Arm Grip | 6 | Movement Precision Decrease | 16 |
Robotic Arm Release | 7 | Clockwise Rotation | 17 |
Robotic Arm Segment 1 | 8 | Counter-clockwise Rotation | 18 |
Robotic Arm Segment 2 | 9 | Robotic Arm Control | 19 |
Robotic Arm Segment 3 | 10 | | |
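In code, Table 1's key-to-code mapping is naturally a lookup table; a sketch transcribing the table directly:

```python
# Key encodings transcribed from Table 1.
KEY_CODES = {
    "Directional Button Group": 1,
    "Low Speed Mode": 2,
    "High Speed Mode": 3,
    "Robotic Arm Positioning": 4,
    "Robotic Arm Retraction": 5,
    "Robotic Arm Grip": 6,
    "Robotic Arm Release": 7,
    "Robotic Arm Segment 1": 8,
    "Robotic Arm Segment 2": 9,
    "Robotic Arm Segment 3": 10,
    "Auto Obstacle Avoidance Enable": 11,
    "Auto Obstacle Avoidance Disable": 12,
    "Auto Path Finding Enable": 13,
    "Auto Path Finding Disable": 14,
    "Movement Precision Increase": 15,
    "Movement Precision Decrease": 16,
    "Clockwise Rotation": 17,
    "Counter-clockwise Rotation": 18,
    "Robotic Arm Control": 19,
}

assert KEY_CODES["Robotic Arm Grip"] == 6
```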
Table 2.
Descriptive statistics of facial expression data.
Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
---|---|---|---|---|---|---|---|
Pre-Optimization | −0.2907 | −0.3815 | −0.3146 | −0.1890 | 0.1081 | −0.4900 | −0.1000 |
Post-Optimization | −0.2208 | −0.3097 | −0.2045 | −0.1310 | 0.1494 | −0.5200 | 0.0700 |
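The columns of Tables 2, 4, and 6 (mean, quartiles, SD, min, max) can be reproduced with pandas; the sample SD (ddof = 1) is an assumption about the paper's convention.

```python
import pandas as pd

def describe_metric(values):
    """Mean, 25th/50th/75th percentiles, SD, min, max, as in Tables 2, 4 and 6."""
    s = pd.Series(values)
    return {
        "Mean": s.mean(),
        "25th Pctl": s.quantile(0.25),
        "Median": s.median(),
        "75th Pctl": s.quantile(0.75),
        "SD": s.std(),  # pandas default is the sample SD (ddof=1)
        "Min": s.min(),
        "Max": s.max(),
    }
```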
Table 3.
Significance test of facial expression data.
Metric | t Value | Sig |
---|---|---|
Emotional Valence | −2.076 | 0.042 |
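Table 3's t statistic is consistent with a paired-samples t-test on each participant's mean valence under the two interfaces; a sketch with hypothetical data:

```python
from scipy import stats

# Hypothetical per-participant mean valence under each interface.
pre = [-0.31, -0.25, -0.40, -0.18]   # pre-optimization
post = [-0.22, -0.19, -0.35, -0.05]  # post-optimization

t, p = stats.ttest_rel(pre, post)    # Table 3 reports t = -2.076, p = 0.042
print(f"t = {t:.3f}, p = {p:.3f}")
```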
Table 4.
Descriptive statistics of eye-tracking metrics.
Metric | Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
---|---|---|---|---|---|---|---|---|
Fixation Duration (ms) | Pre-Optimization | 24,995.14 | 17,771.60 | 25,818.60 | 34,494.28 | 10,514.00 | 3747.00 | 44,884.60 |
Fixation Duration (ms) | Post-Optimization | 15,840.74 | 10,259.18 | 16,584.00 | 20,320.18 | 8916.89 | 2373.50 | 39,181.90 |
Fixation Count | Pre-Optimization | 102.83 | 80.00 | 103.00 | 122.75 | 35.11 | 32.00 | 198.00 |
Fixation Count | Post-Optimization | 70.13 | 46.00 | 65.50 | 88.50 | 30.34 | 29.00 | 151.00 |
Total Scan Path Length (px) | Pre-Optimization | 16,299.53 | 12,897.75 | 16,162.50 | 19,948.75 | 5384.34 | 3417.00 | 28,275.00 |
Total Scan Path Length (px) | Post-Optimization | 7829.50 | 5319.50 | 7346.00 | 9237.50 | 3266.50 | 3327.00 | 18,439.00 |
Average Scan Path Length (px/s) | Pre-Optimization | 440.16 | 388.86 | 453.24 | 499.07 | 97.89 | 132.68 | 613.25 |
Average Scan Path Length (px/s) | Post-Optimization | 302.33 | 233.24 | 319.23 | 365.46 | 76.12 | 95.41 | 404.61 |
Table 5.
Significance test of eye-tracking metrics.
Statistic | Fixation Duration | Fixation Count | Total Scan Path Length | Average Scan Path Length |
---|---|---|---|---|
z Value | | | −5.515 | |
t Value | 3.637 | 3.86 | | 6.088 |
Sig | 0.001 | <0.001 | <0.001 | <0.001 |
Cohen’s d | 0.939 | 0.997 | 1.902 | 1.572 |
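Table 5 mixes t statistics with a Wilcoxon z for total scan path length, which suggests the test was chosen per metric, presumably by a normality check on the paired differences; the sketch below assumes that procedure and uses the common paired-differences form of Cohen's d (both assumptions, as the captions do not state them).

```python
import numpy as np
from scipy import stats

def compare_paired(pre, post, alpha=0.05):
    """Pick a paired t-test or Wilcoxon signed-rank test by a Shapiro-Wilk
    normality check on the differences; also return paired-samples Cohen's d."""
    diff = np.asarray(pre, dtype=float) - np.asarray(post, dtype=float)
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d on the paired differences
    if stats.shapiro(diff).pvalue > alpha:  # differences look normal
        stat, p = stats.ttest_rel(pre, post)
        test = "paired t"
    else:
        # scipy returns the Wilcoxon W statistic; papers often report
        # its normal-approximation z instead, as Table 5 does.
        stat, p = stats.wilcoxon(pre, post)
        test = "Wilcoxon signed-rank"
    return test, stat, p, d
```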
Table 6.
Descriptive statistics of behavioral data.
Metric | Interface Status | Mean | 25th Pctl | Median | 75th Pctl | SD | Min | Max |
---|---|---|---|---|---|---|---|---|
Mental Demand | Pre-Optimization | 69.73 | 60.00 | 73.00 | 89.25 | 21.30 | 14.00 | 97.00 |
Mental Demand | Post-Optimization | 13.10 | 5.00 | 10.00 | 17.00 | 11.12 | 0.00 | 45.00 |
Physical Demand | Pre-Optimization | 20.73 | 4.50 | 10.00 | 25.75 | 24.96 | 0.00 | 85.00 |
Physical Demand | Post-Optimization | 6.23 | 1.00 | 5.00 | 5.25 | 8.34 | 0.00 | 36.00 |
Temporal Demand | Pre-Optimization | 63.97 | 50.00 | 64.00 | 81.25 | 22.73 | 6.00 | 98.00 |
Temporal Demand | Post-Optimization | 16.53 | 8.75 | 12.50 | 23.75 | 13.05 | 1.00 | 50.00 |
Effort | Pre-Optimization | 58.03 | 35.75 | 60.00 | 77.50 | 24.21 | 13.00 | 98.00 |
Effort | Post-Optimization | 12.57 | 5.00 | 10.00 | 18.50 | 10.13 | 0.00 | 40.00 |
Performance | Pre-Optimization | 50.93 | 10.00 | 61.50 | 81.25 | 33.84 | 0.00 | 100.00 |
Performance | Post-Optimization | 10.00 | 4.25 | 10.00 | 15.00 | 9.02 | 0.00 | 40.00 |
Frustration Level | Pre-Optimization | 46.17 | 13.75 | 13.75 | 71.25 | 32.47 | 0.00 | 99.00 |
Frustration Level | Post-Optimization | 6.03 | 0.75 | 5.00 | 10.00 | 7.16 | 0.00 | 33.00 |
Task Completion Time (ms) | Pre-Optimization | 37,662.67 | 28,680.00 | 34,095.50 | 44,395.75 | 12,445.68 | 18,041.00 | 66,077.00 |
Task Completion Time (ms) | Post-Optimization | 26,415.57 | 20,845.75 | 23,384.50 | 29,139.00 | 9640.10 | 14,231.00 | 58,435.00 |
Table 7.
Significance test of behavioral data.
Statistic | Mental Demand | Physical Demand | Temporal Demand | Effort | Performance | Frustration Level | Task Completion Time |
---|---|---|---|---|---|---|---|
z Value | −6.384 | −3.078 | −6.026 | −6.229 | −4.133 | −4.564 | −6.377 |
Sig | <0.001 | 0.002 | <0.001 | <0.001 | <0.001 | <0.001 | <0.001 |
Table 8.
Hyperparameter search space and optimal configuration.
Hyperparameter | Search Space | Optimal Hyperparameters |
---|---|---|
Learning Rate | [1 × 10⁻⁸, 0.1] | 4.33 × 10⁻⁷ |
First Hidden Nodes | [20, 2000] | 353 |
Second Hidden Nodes | [20, 2000] | 1790 |
Batch Size | [5, 100] | 100 |
Epochs | [5, 500] | 458 |
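The captions do not identify the search algorithm behind Table 8, so the sketch below assumes plain random search over the stated ranges; the learning rate is drawn log-uniformly, the usual choice for a range spanning seven orders of magnitude.

```python
import random

def sample_config():
    """Draw one configuration from Table 8's search space."""
    return {
        "learning_rate": 10 ** random.uniform(-8, -1),  # log-uniform on [1e-8, 0.1]
        "hidden1": random.randint(20, 2000),
        "hidden2": random.randint(20, 2000),
        "batch_size": random.randint(5, 100),
        "epochs": random.randint(5, 500),
    }

def random_search(evaluate, n_trials=50):
    """evaluate: hypothetical callable mapping a config dict to validation loss."""
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config()
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```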