Article

Facial Features Controlled Smart Vehicle for Disabled/Elderly People

by Yijun Hu 1, Ruiheng Wu 1,*, Guoquan Li 2, Zhilong Shen 2 and Jin Xie 1
1 Department of Electronic and Electrical Engineering, Brunel University London, London UB8 3PH, UK
2 School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(6), 1088; https://doi.org/10.3390/electronics14061088
Submission received: 2 December 2024 / Revised: 28 February 2025 / Accepted: 5 March 2025 / Published: 10 March 2025
(This article belongs to the Special Issue Active Mobility: Innovations, Technologies, and Applications)

Abstract

Mobility limitations due to congenital disabilities, accidents, or illnesses pose significant challenges to the daily lives of individuals with disabilities. This study presents a novel design for a multifunctional intelligent vehicle, integrating head recognition, eye-tracking, Bluetooth control, and ultrasonic obstacle avoidance to offer an innovative mobility solution. The smart vehicle supports three driving modes: (1) a nostril-based control system using MediaPipe to track displacement for movement commands, (2) an eye-tracking control system based on the Viola–Jones algorithm processed via an Arduino Nano board, and (3) a Bluetooth-assisted mode for caregiver intervention. Additionally, an ultrasonic sensor system ensures real-time obstacle detection and avoidance, enhancing user safety. Extensive experimental evaluations were conducted to validate the effectiveness of the system. The results indicate that the proposed vehicle achieves an 85% accuracy in nostril tracking, over 90% precision in eye direction detection, and efficient obstacle avoidance within a 1 m range. These findings demonstrate the robustness and reliability of the system in real-world applications. Compared to existing assistive mobility solutions, this vehicle offers non-invasive, cost-effective, and adaptable control mechanisms that cater to a diverse range of disabilities. By enhancing accessibility and promoting user independence, this research contributes to the development of inclusive mobility solutions for disabled and elderly individuals.

1. Introduction

With advances in social civilization and humanistic care, there is a growing emphasis on improving quality of life for individuals with disabilities. People with severe motor impairments face significant challenges in daily mobility, which can lead to social isolation, dependency, and reduced quality of life. Globally, the prevalence of disability is rising, and addressing these challenges has become increasingly important, particularly in disadvantaged communities where resources are limited [1].
According to the World Health Organization (WHO) [2], nearly 15% of the global population—approximately 1 billion people—live with some form of disability, and the prevalence of severe disabilities is higher in developing countries. Paralysis, caused by conditions such as strokes, accidents, or spinal cord injuries, is a common form of disability that severely limits mobility [3,4].
Traditional mobility aids, such as manual or electric wheelchairs, require either physical strength or complex hand-operated controls, making them unsuitable for users with severe motor impairments. This highlights the need for an innovative, hands-free solution that provides intuitive control mechanisms, ensuring a higher level of autonomy for disabled and elderly individuals. In response to this demand, researchers have explored various assistive technologies, including robotic wheelchairs, head-tracking systems, and eye-tracking-based control methods. However, existing solutions often suffer from limitations such as high cost, invasive sensors, and poor adaptability to diverse user needs.
The research motivation behind this study stems from the necessity of developing a cost-effective, multimodal control system that improves accessibility for individuals with different physical impairments. By integrating multiple control mechanisms, such as eye-tracking, nostril-based control, and Bluetooth-assisted operation, the proposed system aims to enhance user independence and ease of navigation.
Several challenges must be addressed in designing an effective smart wheelchair system. The high cost of implementation remains a significant barrier to accessibility, particularly for users in low-income regions. Environmental adaptability is another crucial factor, as smart mobility solutions must function efficiently in dynamic and unpredictable conditions with varying lighting, obstacles, and terrain types. User adaptability and ease of learning are also essential, as individuals with severe disabilities may require significant training to efficiently use advanced mobility control systems, necessitating a more intuitive design. Additionally, accuracy and reliability in tracking user inputs remain a concern, as technologies like eye-tracking and head movement detection often suffer from errors due to external disturbances, requiring more robust algorithms to improve precision. Battery life and power efficiency are important considerations, as frequent recharging can limit the practical usability of smart mobility devices. Safety and security risks must be minimized, ensuring that autonomous navigation meets high safety standards to prevent accidents or unintended movements, particularly in crowded spaces. Lastly, customization for individual needs is vital, as different disabilities require different levels of control and interaction, making a one-size-fits-all approach ineffective.
To address the aforementioned challenges, this study presents a novel smart vehicle system with the following contributions:
  • Multimodal Control System: The proposed system integrates three distinct, non-invasive control mechanisms (eye-tracking, nostril-based control, and Bluetooth operation) to accommodate users with varying impairments.
  • Innovative Nostril-Based Navigation: A novel method utilizing nostril movement tracking for wheelchair control, expanding accessibility for users with limited upper and lower limb mobility.
  • Enhanced Eye-Tracking System: An improved Viola–Jones-based eye-tracking algorithm that enhances precision and adaptability across different environmental conditions.
  • Bluetooth-Assisted Caregiver Support: The integration of a Bluetooth-based remote-control feature, enabling caregivers to assist users when needed, ensuring safety and reliability.
  • Real-Time Obstacle Avoidance: Implementation of ultrasonic sensors for real-time detection and avoidance of obstacles, enhancing autonomous navigation and preventing collisions.
  • Cost-Effective and Scalable Design: Utilizing commercially available components and open-source software frameworks to develop an affordable and replicable solution.
  • Optimized Energy Efficiency: The system can be designed to operate efficiently with minimal power consumption, extending battery life and ensuring prolonged usability.
By addressing these key challenges and incorporating multiple control options, this study aims to contribute to the development of a more inclusive and accessible mobility solution for individuals with disabilities.
The rest of this paper is organized as follows: Section 2 discusses related work and reviews existing solutions in assistive mobility. Section 3 and Section 4 describe the detailed methodologies and evaluation of the proposed vehicle using three different techniques. Section 5 analyzes the findings, evaluates the system’s performance, identifies future work, and concludes the study.

2. Related Work

2.1. Comparison of the Proposed Approach with the Existing Solutions

Smart wheelchairs have evolved significantly with the integration of advanced technologies such as eye-tracking, head movement detection, and machine learning. However, many of these existing solutions focus on a single modality of control, limiting adaptability for users with different disabilities.
The proposed study offers several key advantages compared to existing solutions:
  • The solution does not involve wearable navigation devices. Unlike solutions such as Munevo [5], which require users to wear smart glasses or other devices attached to the head or body, the proposed wheelchair operates without external wearables, eliminating discomfort and dependency on additional devices and making the wheelchair more inclusive for users who prefer not to, or cannot, use wearables. This ensures accessibility for a broader range of physical disabilities.
  • The proposed solution offers three control modes (manual, semi-autonomous, and fully autonomous operation), providing users with tailored control options suitable for different environments and personal preferences and enhancing usability and adaptability in various scenarios. This flexibility surpasses other solutions like Ability Drive [6], which focuses on a single input method, or Imperial College’s AI wheelchair [7], which emphasizes full autonomy with a complicated design.
  • While solutions like Braze Mobility [8] and Imperial College’s AI wheelchair focus on obstacle detection, the proposed wheelchair goes further by combining real-time obstacle detection with intelligent rerouting, offering safer and more efficient navigation. Users can confidently navigate complex environments.
  • Many existing solutions, such as those by Braze Mobility or Ability Drive, do not emphasize emergency management. The proposed wheelchair will integrate fall detection alerts, enhancing user safety, especially for individuals at risk of falls, offering peace of mind to users.
  • Unlike retrofitting solutions like Braze Mobility, which add functionalities to existing wheelchairs, the proposed solution will be a fully integrated system.
Table 1 shows a comparison between the proposed solution and current ones. As can be seen, the proposed system distinguishes itself by integrating multiple intuitive control options, ensuring flexibility, affordability, and ease of use while maintaining high reliability.

2.2. Preliminaries

This section consolidates key background technologies necessary for understanding the proposed system. The fundamental principles behind the three control mechanisms, nostril-based, eye-tracking-based, and Bluetooth-based control, are discussed, along with relevant algorithms and hardware implementations.
  • Eye-Tracking-Based Control
Eye-tracking technology enables users to control the intelligent vehicle through gaze direction. This study employs the Viola–Jones algorithm, a robust face detection technique, to identify eye positions and gaze movements. The detected signals are processed via an Arduino Nano board, which translates them into navigation commands. This approach provides a seamless interface for users with significant upper limb impairments, enabling effective control with minimal effort.
  • Nostril-Based Control
The nostril-based control system utilizes MediaPipe, an open-source machine learning framework, to track nostril displacement. By capturing and analyzing head movement over time, the system translates nostril displacement into directional vehicle commands, allowing hands-free navigation. This innovative approach is particularly beneficial for users with limited head or neck movement capabilities.
  • Bluetooth-Based Control
Bluetooth connectivity provides an alternative control mode, allowing caregivers or users with limited motor abilities to operate the vehicle using a smartphone or other Bluetooth-enabled devices. The integration of Bluetooth ensures accessibility and ease of use, offering an additional fail-safe mechanism in situations where eye-tracking or nostril-based control may be ineffective.
  • Ultrasonic Obstacle Avoidance
To ensure safety, the vehicle is equipped with ultrasonic sensors that detect and avoid obstacles in real-time. The system continuously scans the environment, stopping or rerouting the vehicle when obstacles are detected within a predefined range. The integration of multiple sensors enhances the precision and responsiveness of obstacle avoidance, improving overall reliability.
  • Hardware and Software Implementation
The intelligent vehicle incorporates essential hardware components to facilitate seamless operation. At the core of the system is the Arduino Nano, which processes signals from both the eye-tracking and nostril-based control systems. The MediaPipe framework plays a crucial role in enabling head movement detection and nostril tracking, providing an intuitive hands-free control mechanism. To allow for remote assistance and additional accessibility, the Bluetooth module is integrated, enabling external control through a smartphone application. Safety and navigation are enhanced through the use of ultrasonic sensors, which provide real-time obstacle detection and avoidance, ensuring a smooth and secure driving experience. MATLAB 2008 serves as the primary software platform for processing vision-based control inputs, allowing for efficient data interpretation and system responsiveness. Together, these components create a robust and adaptable system designed to improve mobility for individuals with disabilities.
  • Extended Considerations and Future Prospects
As the field of assistive technology continues to evolve, future work should explore deep learning-based improvements in vision processing, adaptive user interface enhancements, and multimodal integration of additional biometric indicators for refined control mechanisms. Further research into energy-efficient hardware solutions and cloud-based control interfaces could enhance the long-term feasibility and adoption of such intelligent mobility systems.

3. Methodology

This section outlines the methodologies employed in developing the multifunctional smart vehicle. The proposed system integrates three distinct control mechanisms—eye-tracking, nostril-based navigation, and Bluetooth-assisted operation—each designed to enhance accessibility and usability. The selection of these methods was guided by extensive research into assistive mobility technologies, ensuring a non-invasive, cost-effective, and adaptable solution.
The first subsection details the implementation of the Viola–Jones eye-tracking algorithm, chosen for its accuracy in detecting eye positions and gaze direction without requiring specialized hardware. The second subsection introduces the nostril-based control system, leveraging MediaPipe’s machine learning framework to track nostril displacement for intuitive navigation. Finally, the Bluetooth-assisted mode provides an alternative control method, enabling remote operation for caregivers and users with varying levels of mobility. Additionally, the integration of ultrasonic sensors for real-time obstacle detection ensures enhanced safety and reliability.
This section further discusses the hardware and software configurations, algorithmic approaches, and experimental setup used to evaluate the effectiveness of the proposed system. The methodologies presented here lay the foundation for assessing the system’s performance in real-world scenarios, ensuring its feasibility as an accessible mobility aid.

3.1. Eye-Tracking Control Technologies Based on Viola–Jones Eye Detection Algorithm

One of the core features of the multifunctional smart vehicle proposed in this paper is its eye-tracking control technology. During the initial stages of design, the author faced challenges in selecting the most suitable eye-tracking technology for integration into the intelligent vehicle, as well as identifying the technology best suited to assist individuals with vehicle operation. To address this, an extensive review of the literature on eye-tracking technologies was conducted. This review proved instrumental in determining the most appropriate eye-tracking technology, ultimately justifying the selection of the Viola–Jones eye detection algorithm.

3.1.1. History of the Algorithm

In 2001, Paul Viola and Michael Jones initially proposed the Viola–Jones algorithm as a framework aimed at achieving high detection rates while providing a rapid solution for image processing. A central element of the framework is the integral image computed for the detector, which avoids the significant computational cost of repeatedly summing raw pixel values [9]. The Viola–Jones technique offers excellent detection accuracy while minimizing computational time in object detection tasks [10].
This paper reviews various studies related to face identification using the Viola–Jones algorithm. Numerous researchers have employed face patterns as samples and training data to refine the algorithm. For instance, Fatima [11] conducted research on the Viola–Jones algorithm, classifying sleep-deprived drivers based on their eye conditions and facial expressions. The test achieved accuracies of 95.4% and 96.5% when combined with SVM (Support Vector Machine) and AdaBoost approaches, respectively. Alyushin [12] improved the algorithm’s performance by a factor of 2.5 by identifying the minimal size of the integral image and selecting optimal Haar features. Similarly, Lu [13] utilized composite features to enhance the effectiveness of the Viola–Jones algorithm, aiming to reduce the error rate in face identification.
In another study, Liliana Enciso-Quispe [14] integrated the Internet of Things (IoT) concept with the Viola–Jones algorithm to improve bus services through enhanced facial recognition. Satyanarayana [15] advanced the algorithm’s performance by incorporating skin mapping, segmentation, and the Viola–Jones method, significantly reducing error values in face recognition. Rattakorn [16] applied the Viola–Jones algorithm with skin colour segmentation to develop an efficient expert system for driver training.
To address driver fatigue, Sankaran [17] and Rajendran [18] combined the Viola–Jones algorithm with the Percentage of Eyes Closed method, successfully identifying drowsiness to reduce accident risks. Similarly, Manjula [19] integrated the Viola–Jones algorithm with a Convolutional Neural Network (CNN) to achieve similar objectives. Jahnavi [20] employed pre-recorded facial recognition data to design an automated door access control system. Sanabria-Macías [21] developed a successful face tracking system using the Viola–Jones algorithm, achieving strong results for detecting faces in both frontal and lateral postures. Kirana [22] employed the Hill Climbing algorithm to minimize redundancy, improving accuracy from 77% to 85%. In Sriman’s work [23], the Directed Gradient Histogram was compared with the Viola–Jones method for face detection. The study concluded that, in addition to face detection, it is possible to recognize a person’s emotions. Joshi [24] applied the Viola–Jones algorithm to detect facial characteristics, particularly focusing on the mouth. The accuracy of detecting facial features improved with an increase in the threshold value.

3.1.2. Description of Viola–Jones Algorithm

The procedural steps of the Viola–Jones algorithm are as follows: The process begins by converting the input image to grayscale. Next, the cumulative sum of pixel values in the image is computed. These cumulative sums are then used to create an integral image [25], which allows the sum of intensities within any rectangular region to be obtained with a few lookups, a key requirement for evaluating the Haar features at the core of the Viola–Jones method.
Figure 1 shows how the total pixel values are calculated, where (A) and (B): n = 2; (C): n = 3; and (D): n = 4.
The Haar feature value F is obtained by subtracting the sum of pixel values in the dark regions from the sum of pixel values in the light regions [26]. The Viola–Jones algorithm then examines each sub-window of the image to determine whether the features match. A feature consists of two or more rectangles. Figure 2 displays several kinds of rectangle features.
The Viola–Jones detector evaluates more than 160,000 characteristics to determine whether a face is visible in a specific sub-window using a learning technique known as AdaBoost. These characteristics are sometimes referred to as weak classifiers. To manage the large number of features, the AdaBoost technique is employed to categorize and eliminate some of the learned characteristics. The method works by combining weights from multiple weak classifiers [27]. AdaBoost reduces the vast number of characteristics to a smaller set by selecting the most effective features from all available facial objects [28].
The final stage of this process is the cascade classifier, which ensures accurate results and accelerates the face recognition process. It consists of several stages, each grouping relevant features to perform a robust classification. Additionally, it identifies faces by enclosing objects in the image [29]. By rapidly rejecting negative samples, the algorithm achieves a high detection rate and discards numerous sub-windows early, so that later stages of the classification process only examine promising candidates [30]. The cascade classification procedure builds on earlier AdaBoost training [31]. A block diagram of the cascade classifier is presented in Figure 3 [32].
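To make the mechanism above concrete, the following minimal NumPy sketch (not the authors’ MATLAB implementation) shows how an integral image is built and how a two-rectangle Haar feature can then be evaluated with a handful of table lookups; the function names and the toy image patch are illustrative assumptions only.

```python
import numpy as np

def integral_image(gray):
    """Cumulative sum over rows and columns; ii[r, c] is the sum of all
    pixels above and to the left of (r, c), inclusive."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixel values inside a rectangle using at most four lookups."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_haar(ii, top, left, height, width):
    """Horizontal two-rectangle Haar feature:
    (sum of light half) - (sum of dark half)."""
    half = width // 2
    light = rect_sum(ii, top, left, height, half)
    dark = rect_sum(ii, top, left + half, height, half)
    return light - dark

# Toy usage on a random 24x24 grayscale patch
gray = np.random.randint(0, 256, (24, 24)).astype(np.int64)
ii = integral_image(gray)
print(two_rect_haar(ii, top=4, left=2, height=8, width=12))
```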

3.1.3. Advantages of Using the Viola–Jones Algorithm for Eye-Tracking

After thoroughly evaluating the three eye-tracking techniques, the Viola–Jones algorithm was ultimately selected for eye-tracking in this paper. The pupillary corneal reflex method imposes stringent requirements on the experimental environment and equipment. For instance, it demands a highly stable light source, a high-resolution infrared camera, and a high-precision image processing system to perform eye-tracking effectively. The corneal imaging technique, meanwhile, requires the use of a specialized head-mounted corneal imaging camera, which is expensive, challenging to operate, and demands a certain level of expertise.
In comparison with the pupillary corneal reflex method and the corneal imaging technique, the Viola–Jones algorithm is noticeably easier to implement for eye-tracking. It is the only method that does not require an infrared light source or scanning glasses near the eye. Instead, a standard camera records a video of the eye, which is processed by a computer. This approach is non-invasive, causing no harm or interference to the eyes. Moreover, the cost of a standard camera is significantly lower than that of an eye-tracker, head-mounted corneal imaging camera, or infrared camera.
Overall, using the Viola–Jones algorithm for eye-tracking is more economical and environmentally friendly compared to other methods. It requires only a computer camera and standard MATLAB software to perform the eye-tracking tasks, aligning well with the technical objectives while also being more beneficial to human health.

3.2. Nostril-Controlled Technology

3.2.1. Motivation for Using Nostril-Controlled Technique

At the start of the research work, the original plan was to focus on studying eyeball control for a vehicle. However, during the experiments on eye recognition using the Viola–Jones algorithm, it was observed that the algorithm occasionally misidentified nostrils as eyeballs, as illustrated in Figure 4 below. This misclassification reduced the accuracy of the algorithm in recognizing eyeballs.
However, an inspiration was drawn that the research could be shifted from controlling the vehicle with the eyes to controlling it with the nostrils. This approach could broaden the potential user base and provide an alternative solution for individuals with both eye and mobility impairments. Inspired by this idea, nostril-controlled head recognition was explored as a new direction for the design.

3.2.2. Description of the Nostril-Controlled Technique

Embarking on this head recognition research was akin to navigating uncharted territory, with no predefined roadmap to follow. The initial phase was dedicated to establishing a robust research methodology. After an intensive exploration of possibilities, two viable approaches emerged. The first relied on traditional computer vision algorithms, which required a comprehensive understanding of digital image processing. This approach involved identifying image features and designing methods to detect and track them. The second approach utilized contemporary neural network models, which automatically identify facial feature points and output the required information. Ultimately, the latter approach was chosen for the project. Neural networks offered a more efficient and streamlined solution, allowing us to focus more on conducting experiments and refining advanced machine learning techniques, accelerating the research process and enhancing its outcomes.
MediaPipe, a multimedia machine learning framework developed and open-sourced by Google, was adopted. MediaPipe holds a prominent position in the field of human gesture recognition and offers extensive processing libraries for common multimedia tasks. These include computer vision tasks (e.g., face detection, pose estimation, and object tracking), audio processing (e.g., sound enhancement and speech recognition), and image processing. Its holistic human-body model outputs a total of 543 feature points: 33 pose feature points, 468 facial feature points, and 21 hand feature points per hand.
The core framework of MediaPipe is implemented in C++, with support for languages such as Java and Objective-C. Its key concepts include the following:
  • Packet: The basic unit of data, representing information at a specific point in time (e.g., a video frame or a small audio signal).
  • Stream: A sequence of packets arranged in ascending chronological order, where at most one packet exists at a given timestamp.
  • Graph: A directed structure through which streams flow, consisting of computational units called Calculators. Streams originate from a Source Calculator or Graph Input Stream, flow through the graph, and exit at the Sink Calculator or Graph Output Stream.
This paper implemented a nostril-based control system using MediaPipe’s human posture recognition capabilities. The system operated by first activating the camera through MediaPipe to accurately detect nostril feature points and transmit their relative coordinates in real-time. Over a specified time interval, such as two seconds, the system calculated the horizontal displacement of the nose by subtracting the initial horizontal coordinate from the final coordinate. A positive difference indicated rightward movement, while a negative difference indicated leftward movement. Based on the direction of the nose’s movement, commands were issued to control the vehicle to move either right or left. This system effectively demonstrated the feasibility of using nostril-based gestures to control the vehicle’s movement, leveraging the precision and versatility of MediaPipe.
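As an illustration of this pipeline, the sketch below uses MediaPipe’s Python face-mesh solution to read the nose-tip landmark each frame and accumulate its horizontal displacement over a two-second window before printing a left/right command. The landmark index, displacement threshold, and command strings are assumptions for demonstration, not the exact parameters used in this study.

```python
import time
import cv2
import mediapipe as mp

# Nose-tip landmark index in the MediaPipe face mesh (index 1 is commonly
# used for the nose tip; treat this as an assumption to verify).
NOSE_TIP = 1
INTERVAL_S = 2.0   # accumulate displacement over 2-second windows
THRESHOLD = 0.02   # minimum normalised displacement before issuing a command

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)

start = time.time()
start_x = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    x = results.multi_face_landmarks[0].landmark[NOSE_TIP].x  # normalised 0..1
    if start_x is None:
        start_x = x
    if time.time() - start >= INTERVAL_S:
        dx = x - start_x                  # positive -> nose moved right in the image
        if dx > THRESHOLD:
            print("command: turn right")  # stand-in for the motor command
        elif dx < -THRESHOLD:
            print("command: turn left")
        start, start_x = time.time(), x

cap.release()
```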

3.2.3. Operation Steps

To begin the experimental setup, MediaPipe was installed and configured. As a prerequisite, Anaconda was installed to provide a robust Python 3.8 environment. The environment was then verified by opening the Start Menu, navigating to Anaconda, and launching the Anaconda Navigator, confirming that the installation was successful and compatible with MediaPipe and other dependencies.
The MediaPipe nostril-recognition algorithm then runs and outputs the relative displacement coordinates of the nose at each moment. A time interval of 2 s was set, and all delta coordinates within this period were summed to obtain a new coordinate, recorded as (x, y), which reflects the displacement of the nostril in the plane over the two seconds. To facilitate the experiment, four light bulbs were prepared, representing the vehicle functions of forward movement, reverse movement, left turn, and right turn; each lit bulb indicates that the vehicle would perform the corresponding movement.
A rectangular coordinate system is established, and the displacement (x, y) is mapped to the bulbs as follows (a minimal sketch of this mapping follows the rules below):
If x > 0 and y > 0 (northeast):
The bulbs controlling forward movement and right turn light up, while the other bulbs remain off.
If x > 0 and y < 0 (southeast):
The bulbs controlling reverse movement and right turn light up, while the other bulbs remain off.
If x < 0 and y > 0 (northwest):
The bulbs controlling forward movement and left turn light up, while the other bulbs remain off.
If x < 0 and y < 0 (southwest):
The bulbs controlling reverse movement and left turn light up, while the other bulbs remain off.
Additional rules:
  • The x-coordinate controls left and right turns.
  • The y-coordinate controls forward and backward movements.
  • If x = 0, the left and right turns remain off.
  • If y = 0, the forward and backward movements remain off.
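The quadrant rules above can be expressed compactly as a small function; the sketch below mirrors them, with the bulb names chosen purely for illustration.

```python
def bulbs_for_displacement(x, y):
    """Map the 2-second nostril displacement (x, y) to the set of lit bulbs.
    x controls left/right turns, y controls forward/backward movement."""
    lit = set()
    if y > 0:
        lit.add("forward")
    elif y < 0:
        lit.add("reverse")
    if x > 0:
        lit.add("right turn")
    elif x < 0:
        lit.add("left turn")
    return lit

# x > 0, y > 0 (northeast): forward + right turn
assert bulbs_for_displacement(0.3, 0.1) == {"forward", "right turn"}
# x < 0, y < 0 (southwest): reverse + left turn
assert bulbs_for_displacement(-0.2, -0.4) == {"reverse", "left turn"}
# x = 0: neither turn bulb lights up
assert bulbs_for_displacement(0.0, 0.5) == {"forward"}
```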

3.3. Bluetooth Controlled Technology

In this mode, the smart vehicle can be operated remotely using a Bluetooth-enabled device. This feature is particularly beneficial for wheelchair users, as it allows for hands-free operation, reducing the need for physical effort and enhancing overall accessibility. By enabling wireless control, the system provides greater independence, allowing users to navigate the vehicle effortlessly without direct physical interaction. Additionally, caregivers or family members can assist remotely, ensuring a safer and more convenient experience. The seamless Bluetooth connectivity eliminates the need for complex wiring or mechanical controls, making the interface user-friendly and adaptable to individual needs. In case of any difficulties, an external controller can quickly take over, enhancing safety and reliability. Furthermore, the system can be developed to support customizable controls, such as voice commands or gesture-based inputs, offering a more personalized and intuitive driving experience for disabled and elderly individuals.

4. Experimental Evaluation

To assess the performance and reliability of the proposed smart vehicle, a series of experimental evaluations were conducted. A total of 14 vehicles were built and tested to validate the effectiveness of the system. The three control modes, nostril-tracking-based control, eye-direction-based control, and Bluetooth-based control, were demonstrated. Preliminary results showed the robustness of the system, achieving approximately 85% accuracy in nostril tracking and over 90% precision in eye direction detection. Additionally, the vehicle’s obstacle avoidance system proved highly effective, successfully detecting and manoeuvring around obstacles within a 1 m range. These findings highlight the system’s potential as a reliable and user-friendly mobility solution.

4.1. Eye-Controlled Vehicle

4.1.1. Software: The Core Code of the Viola–Jones Algorithm

The Viola–Jones algorithm forms the basis of the eye-tracking functionality. Once the pupil of the eye is identified, the algorithm sets the center point of the pupil as the reference. In the eye-detection map, when the eye is looking straight ahead, the distance from the center to the left edge (disL) is approximately 16 units greater than the distance from the center to the right edge (disR). If disL > disR + 16, the eye is looking to the right; if disL < disR, the eye is looking to the left. This logic works because, when the eye is looking straight ahead or to the right, disL is always greater than disR, so disL < disR can only occur when the eye is looking to the left.
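A hedged sketch of this decision rule is given below; the 16-unit offset follows the description above, while the function name, return values, and example distances are illustrative assumptions rather than the authors’ MATLAB code.

```python
STRAIGHT_OFFSET = 16  # disL exceeds disR by about 16 units when looking straight

def gaze_direction(dis_l, dis_r):
    """Classify gaze from the pupil-centre distances to the left (dis_l)
    and right (dis_r) edges of the detected eye region."""
    if dis_l > dis_r + STRAIGHT_OFFSET:
        return "right"      # would trigger a right-turn command
    if dis_l < dis_r:
        return "left"       # would trigger a left-turn command
    return "straight"       # would trigger a forward command

print(gaze_direction(40, 24))  # straight: difference is about 16
print(gaze_direction(48, 24))  # right
print(gaze_direction(20, 30))  # left
```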

4.1.2. Hardware Design

In order to demonstrate the eye-tracking technique, a model vehicle based on an experimental cart was implemented and tested as described below.
The computer used in this experiment was the ROG Strix7 Plus laptop. This computer was chosen due to its high-performance specifications, which are capable of meeting all experimental requirements. MATLAB 2008 was used as the primary software for programming and controlling the circuits involved in the experiment.
The vehicle models utilized wheelbarrow kits featuring two motors and three or four wheels. The kits include a battery compartment that accommodates two or four AAA batteries for motor operation. The three-wheel multifunctional vehicle under natural stationary conditions is shown in Figure 5 (with the ultrasonic obstacle detection system).
Figure 6 presents the block diagram of the control circuit. The USB 2.0 HD UVC webcam is connected to the computer via a USB cable, enabling real-time image processing through MATLAB 2008. MATLAB communicates with the Arduino Nano development board, which is also connected to the computer via a USB cable. The Arduino Nano is mounted on the multifunctional vehicle and interfaces with the L293D motor driver module through wired connections. The L293D motor driver module controls the left and right motors by delivering directional signals and power. Additionally, the power supply is directly connected to the L293D motor driver module to ensure a stable power source for motor operation.
Figure 7 illustrates the control flowchart. The deliverables for controlling the vehicle using eye-tracking with the Viola–Jones algorithm are outlined as follows:
The camera detects the author’s eye position, closely aligning with the “looking straight ahead” reference image utilized in the code. The Viola–Jones algorithm interprets this as the eyes looking straight ahead. Consequently, MATLAB sends a command to the vehicle to move forward, and the vehicle advances on both left and right wheels as shown in Figure 8. Similarly, as shown in Figure 9, the camera detects that the author’s eye position closely resembles the “looking to the left” reference image used in the code. The Viola–Jones algorithm determines that the eyes are directed to the left, prompting MATLAB to send a left-turn command to the vehicle. In response, the right wheels drive forward while the left wheels follow, enabling the left turn. The process for turning right mirrors this procedure.
In summary, the operation steps are as follows: First, connect the Arduino Nano development board to the computer. Next, open MATLAB and run the relevant code. After the system initializes, eye movements can be used to control the vehicle: gaze forward to move straight, look left to turn left, and look right to turn right. This intuitive control mechanism ensures smooth and effortless operation, aligning with user mobility needs.

4.2. Bluetooth Controlled Vehicle

4.2.1. Hardware and Software Selection

The HC-05 Bluetooth module was selected to establish the Bluetooth connection. The HC-05 offers good sensitivity, is easy to develop with, and is cost-effective. To configure this module, the user must enter a special command mode during the device’s startup.
The XCOM V2.0 Bluetooth serial port debugging software was chosen. XCOM V2.0 is a robust tool that facilitates debugging and monitoring of serial port communication; it supports various serial communication protocols, including RS232, RS485, and RS422, enabling connections to a wide range of serial devices such as computers.
Embedded XCOM offers an intuitive and user-friendly interface, allowing users to send and receive data, monitor the communication status, and analyze or process data efficiently. Additionally, it supports data display in both hexadecimal and ASCII formats, simplifying the viewing and editing of transmitted and received data.

4.2.2. Operation Process

To operate the vehicle using the Bluetooth control method, begin by establishing a Bluetooth connection between the computer and the vehicle. Once the Bluetooth link is active, connect the computer to the vehicle using a USB cable and take note of the serial port assigned to this connection. After confirming the USB connection, launch the XCOM V2.0 Bluetooth serial port debugging tool and select the same serial port identified for the USB connection in the serial port selection column to ensure proper communication. Once the Bluetooth pairing is successfully completed, the system is ready for control. At this stage, use the multiple transmission column in XCOM V2.0 to send numeric commands ranging from “0” to “5” with each number corresponding to specific Bluetooth signals that control different functions of the smart vehicle. This process enables smooth and responsive Bluetooth-based operation of the vehicle.
The numeric commands in the multiple sending interface of XCOM V2.0 are as follows (an illustrative serial-port sketch follows this list):
  • Click on “0”: Sends the forward command to the vehicle.
  • Click on “1”: Sends the backward command.
  • Click on “2”: Controls the vehicle to turn right.
  • Click on “3”: Controls the vehicle to turn left.
  • Click on “4”: Stops the vehicle.
  • Click on “5”: Activates the ultrasonic sensing system for obstacle detection and avoidance.
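For readers who prefer scripting the same commands instead of clicking them in XCOM V2.0, the following pyserial sketch writes the identical one-character commands to the serial port carrying the HC-05 link. The port name and baud rate are assumptions (the HC-05 commonly defaults to 9600 baud), not settings reported in the paper.

```python
import serial  # pyserial

# One-character commands matching the XCOM V2.0 "multiple transmission" panel.
COMMANDS = {
    "forward": b"0",
    "backward": b"1",
    "right": b"2",
    "left": b"3",
    "stop": b"4",
    "avoid_obstacles": b"5",
}

# "COM5" and 9600 baud are illustrative; use the port noted during pairing.
with serial.Serial("COM5", 9600, timeout=1) as link:
    link.write(COMMANDS["forward"])   # vehicle moves forward
    link.write(COMMANDS["stop"])      # vehicle stops
```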

4.3. Nostril Control Function

The vehicle also incorporates nostril-based control, implemented using the MediaPipe algorithm. The core code for this functionality operates by first calling the nose tip tracker module to identify the coordinates of the nose tip. Next, the absolute coordinates of the nose tip are calculated. The algorithm then computes the difference between the absolute coordinates in the current and previous frames on a frame-by-frame basis. Finally, the updated coordinates are displayed on the image, and the previous frame’s nose tip coordinates are adjusted accordingly.
This process ensures precise tracking of nostril movements for vehicle control. Figure 10 illustrates the vehicle’s movement controlled by nostril movements.
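A minimal per-frame update reflecting this description might look like the following; the helper name and the landmark object (with normalised x and y fields, as MediaPipe provides) are assumptions for illustration.

```python
def update_nose_delta(landmark, frame_shape, prev_abs):
    """One per-frame step of the nostril tracker described above.
    `landmark` holds normalised (x, y); `prev_abs` is the previous frame's
    absolute nose-tip position, or None on the first frame."""
    h, w = frame_shape[:2]
    abs_xy = (landmark.x * w, landmark.y * h)   # absolute pixel coordinates
    if prev_abs is None:
        delta = (0.0, 0.0)
    else:
        delta = (abs_xy[0] - prev_abs[0], abs_xy[1] - prev_abs[1])
    return abs_xy, delta   # the caller stores abs_xy as prev_abs for the next frame
```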

4.4. Ultrasonic Obstacle Avoidance Function

The vehicle is equipped with an obstacle avoidance system using ultrasonic sensors to enhance its practical application. The system operates as follows:
The vehicle continuously performs real-time ultrasonic distance measurements, maintaining a safe distance of 16 cm, which can be adjusted based on actual requirements. If the measured distance falls below 16 cm, the vehicle stops moving.
The obstacle avoidance mechanism initiates with a rightward head shake to measure the distance, followed by a return to the central position; it then performs a leftward head shake for another distance measurement before returning to the center. Each head-shaking movement includes a 500 ms delay.
The system compares the distances measured during the left and right head-shaking movements. If the distance on the left is greater than on the right, the vehicle moves left by activating the right motor. Conversely, if the distance on the right is greater than on the left, the vehicle moves right by activating the left motor. The movement duration for these directional adjustments is set to 400 ms.
If no obstacle is detected in front, and the measured distance is significantly greater than the safe distance, the vehicle continues moving straight forward.
This head-shaking obstacle avoidance mechanism ensures that the vehicle can navigate safely and dynamically.
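The avoidance logic runs on the vehicle’s microcontroller; the hardware-agnostic Python sketch below restates it with hypothetical helper functions (measure_distance_cm, turn_sensor, set_motors, stop_motors) standing in for the Arduino routines, using the 16 cm safe distance, 500 ms head-shake delay, and 400 ms turn duration described above.

```python
import time

SAFE_DISTANCE_CM = 16   # stop threshold, adjustable as described above
SHAKE_DELAY_S = 0.5     # 500 ms pause after each sensor head movement
TURN_DURATION_S = 0.4   # 400 ms directional adjustment

def avoidance_step(measure_distance_cm, turn_sensor, set_motors, stop_motors):
    """One iteration of the head-shaking avoidance loop.
    The four arguments are hypothetical hardware helpers."""
    front = measure_distance_cm()
    if front > SAFE_DISTANCE_CM:
        set_motors(left=1, right=1)      # path is clear: keep moving straight
        return

    stop_motors()                        # obstacle within the safe distance

    turn_sensor("right"); time.sleep(SHAKE_DELAY_S)
    right_dist = measure_distance_cm()
    turn_sensor("center"); time.sleep(SHAKE_DELAY_S)
    turn_sensor("left"); time.sleep(SHAKE_DELAY_S)
    left_dist = measure_distance_cm()
    turn_sensor("center")

    if left_dist > right_dist:
        set_motors(left=0, right=1)      # drive the right motor to veer left
    else:
        set_motors(left=1, right=0)      # drive the left motor to veer right
    time.sleep(TURN_DURATION_S)
    stop_motors()
```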

5. Conclusions

The development of the multifunctional intelligent vehicle demonstrates significant advancements in assistive mobility technology for individuals with disabilities. This section interprets the results, discusses their implications, addresses limitations, and outlines future directions.

5.1. Achieved Objectives and Advantages of the Proposed Technology

Disabled individuals often face significant barriers to accessing transportation, which greatly impacts their independence and social inclusion. Our research aims to bridge this gap by introducing a multifunctional intelligent vehicle (wheelchair) that incorporates innovative technologies to enhance mobility and autonomy. This vehicle integrates head movement control, eye-tracking, Bluetooth connectivity, and ultrasonic obstacle avoidance, empowering users with diverse mobility impairments to navigate transport systems safely and independently. The research addresses the critical need for practical, inclusive, and accessible mobility/transportation solutions.
This intelligent vehicle introduces innovative advancements that distinguish it from existing solutions. Unlike conventional designs that depend on wearable devices or physical attachments, this vehicle offers three distinct non-wearable control methods, ensuring greater comfort and ease of use. The design eliminates the need for invasive technologies such as head-mounted corneal imaging cameras or specialized glasses.
Additionally, it features advanced automation that minimizes manual input, making it especially beneficial for users with severe mobility limitations. Equipped with ultrasonic sensors for real-time obstacle detection and avoidance, the vehicle ensures safety and reliability across various environments.
For users with limited limb mobility, head movement control enables hands-free operation. Utilizing the MediaPipe framework, the system detects and interprets head movements by tracking nostril displacement, providing an intuitive and efficient control method. For individuals with restricted head or arm mobility, eye-tracking navigation powered by the Viola–Jones algorithm—processed via an Arduino Nano board and MATLAB—allows gaze-based operation, enabling users to navigate simply by shifting their gaze.
Moreover, the vehicle integrates a Bluetooth interface, allowing control through smartphones or other external devices. When combined with ultrasonic sensors, this feature ensures safe navigation even in crowded or confined spaces.
It is widely recognized that conventional eye-tracking approaches such as pupil–corneal reflection and corneal imaging require a stable light source, a high-resolution infrared camera, and a precise image processing system—components that can be both expensive and complex to implement. However, the test results have confirmed that our system operates effectively under natural light, thanks to improved algorithms and integrated technical features.
The system is intuitive to operate, requiring minimal training—especially in Bluetooth control mode. By leveraging open-source frameworks like MediaPipe and cost-effective hardware such as Arduino, the vehicle remains financially accessible, promoting widespread adoption. Also, its modular design allows for straightforward assembly and easy replication, making it well suited for large-scale production.

5.2. Limitations, Challenges, and Future Work

Although promising results have been achieved, the work presented in this paper is still in its early stage. To further enhance the system’s functionality and facilitate its application in real-world scenarios, several avenues for improvement could be considered:
  • Advanced Algorithms and Tracking Improvements:
Although the Viola–Jones algorithm works well for the tasks in this research, its accuracy and adaptability could be improved. Future work could explore improving or replacing this algorithm with state-of-the-art deep learning architectures, such as convolutional neural networks (CNNs) or Transformer-based frameworks [33]. These advanced models are known for their superior feature extraction capabilities and robustness in complex environments. Additionally, integrating multi-point nostril tracking could improve the precision of motion trajectory estimation. This enhancement is expected to reduce tracking errors and increase the reliability of the system across diverse use cases in practical applications.
  • Quantitative Performance Comparisons:
Comprehensive quantitative analyses are essential to validate the proposed enhancements. Systematic comparisons of the algorithms used in this research with alternative methods, such as deep learning-based approaches, would provide valuable insights into the relative advantages and trade-offs. Key performance metrics, including accuracy, latency, and computational efficiency, should be rigorously assessed to demonstrate measurable improvements.
  • Environmental Testing:
The system’s performance in real-world conditions remains a critical area for evaluation. Future research should conduct extensive testing in diverse environments, encompassing variations in lighting conditions, obstacle densities, and other situational complexities. To improve responsiveness in dynamic settings, increasing the frequency of sensor measurements or deploying multiple ultrasonic sensors in tandem may prove beneficial.
  • Expanding Input Modalities:
Exploring alternative interaction modalities could enhance the system’s accessibility and user experience. Integrating voice commands or gesture recognition would provide non-invasive, intuitive control options, particularly for users with limited mobility. These modalities, combined with the existing tracking system, could offer a multimodal interface, catering to a wide range of user preferences and enhancing overall comfort and engagement.
  • Energy Efficiency and Sustainability:
For future vehicle development, optimizing energy consumption is imperative. Research into low-power circuit designs and smart power management strategies, such as adaptive duty cycling or energy harvesting techniques, could significantly extend battery life. These advancements would contribute to the system’s sustainability and usability in prolonged, untethered applications.
In addition, to ensure the system is inclusive and user-friendly, the future vehicle development could involve participants from diverse demographic and physiological backgrounds, including variations in age, levels of disability, and familiarity with assistive technologies. This approach would help assess the system’s usability, adaptability, and comfort, providing insights into design refinements that accommodate a broader spectrum of user needs.

5.3. Personalization and Mass Production

In future developments, the system’s sensitivity parameters can be optimized to cater to individual user needs, enhancing adaptability for a diverse user base. Additionally, modular and open-source design could offer significant potential for scalability, supporting efficient mass production. Strategic collaborations with assistive technology manufacturers could further streamline the production process and expand distribution channels, ensuring broader accessibility and market reach.

5.4. Scientific Evaluation and Standardization

To ensure the system’s reliability and practical applicability, implementing rigorous evaluation methods will be essential. This involves establishing standardized experiments with consistent environmental conditions, methodologies, and evaluation criteria to provide a reliable basis for performance assessment. Additionally, the use of comprehensive quantitative metrics—such as accuracy, response time, and fault tolerance—will offer a thorough evaluation of the system’s effectiveness. Detailed documentation, including diagrams and technical schematics, should also be provided to clearly illustrate the system’s mechanisms and data flow, ensuring clarity and reproducibility.
This study developed and demonstrated innovative eye-tracking and nostril-recognition control technologies, significantly expanding the potential applications of assistive devices for the disabled and elderly communities. By offering a non-invasive and multifunctional mobility solution, the proposed smart vehicle (wheelchair) provides a transformative option for individuals with severe physical impairments. The ability of the vehicle to seamlessly switch between three distinct driving modes highlights its versatility, practicality, and adaptability to user needs. This novel approach can introduce a new paradigm in mobility assistance, empowering users with greater independence, convenience, and accessibility in their daily lives.

Author Contributions

Conceptualization, R.W.; methodology, R.W., G.L., Y.H. and J.X.; software, Y.H., J.X. and Z.S.; validation, R.W., G.L., Y.H., Z.S. and J.X.; formal analysis, R.W., G.L., Y.H., J.X. and Z.S.; investigation, R.W., G.L., Y.H., J.X. and Z.S.; resources, R.W. and G.L.; data curation, R.W., G.L., Y.H., Z.S. and J.X.; writing—original draft preparation, Y.H. and R.W.; writing—review and editing, R.W.; visualization, Y.H., Z.S. and J.X.; supervision, R.W. and G.L.; project administration, R.W. and G.L.; funding acquisition, R.W. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Brunel University London Publications Fund and was supported by the National Natural Science Foundation of China (Grant No. U21A20447).

Institutional Review Board Statement

Due to the nature of this project, the research involved no risk or very low risk. In accordance with the regulations of Brunel University London, we confirm that approval from the Ethics Committee or Institutional Review Board is not required for our manuscript.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. International Classification of Functioning, Disability and Health (ICF). 2021. Available online: https://www.who.int/classifications/international-classification-of-functioning-disability-and-health (accessed on 11 July 2007).
  2. World Report on Disability. 2011. Available online: https://www.who.int/teams/noncommunicable-diseases/sensory-functions-disability-and-rehabilitation/world-report-on-disability (accessed on 14 December 2011).
  3. Cleveland Clinic. Paralysis: What Is It, Diagnosis, Management, Prevention. 2021. Available online: https://my.clevelandclinic.org/health/diseases/15345-paralysis (accessed on 4 May 2021).
  4. WebMD. Paralysis—Types of Paralysis and Their Causes. 2021. Available online: https://www.webmd.com/brain/paralysis-types (accessed on 8 April 2021).
  5. Available online: https://munevo.com/en (accessed on 29 November 2024).
  6. Available online: https://www.rahanalife.co.uk/ability-drive (accessed on 15 March 2023).
  7. Available online: https://www.imperial.ac.uk/news/185712/self-driving-ai-wheelchair-edges-closer-aiding/ (accessed on 12 April 2018).
  8. Available online: https://brazemobility.com/ (accessed on 12 January 2020).
  9. Nitschke, C.; Nakazawa, A.; Takemura, H. Corneal imaging revisited: An overview of corneal reflection analysis and applications. IPSJ Trans. Comput. Vis. Appl. 2013, 5, 1–18. [Google Scholar] [CrossRef]
  10. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. I. [Google Scholar]
  11. Fatima, B.; Shahid, A.R.; Ziauddin, S.; Safi, A.A.; Ramzan, H. Driver fatigue detection using viola jones and principal component analysis. Appl. Artif. Intell. 2020, 34, 456–483. [Google Scholar] [CrossRef]
  12. Alyushin, M.V.; Lyubshov, A.A. The Viola–Jones algorithm performance enhancement for a person’s face recognition task in the long-wave infrared radiation range. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), Moscow, Russia, 29 January–1 February 2018; pp. 1813–1816. [Google Scholar]
  13. Lu, W.Y.; Ming, Y. Face detection based on Viola–Jones algorithm applying composite features. In Proceedings of the 2019 International Conference on Robots & Intelligent System (ICRIS), Haikou, China, 15–16 June 2019; pp. 82–85. [Google Scholar]
  14. Enciso-Quispe, L.; Barba-Guaman, L.; Sanchez, J.; Quezada-Sarmiento, P.A. Simulation of people counter for public service buses of Loja with IoT concept applying the Viola–Jones algorithm. In Proceedings of the 2018 13th Iberian Conference on Information Systems and Technologies (CISTI), Caceres, Spain, 13–16 June 2018; pp. 1–6. [Google Scholar]
  15. Satyanarayana, P.; Jaya Devi, N.; Sri Hasitha, S.; Sesha Sai, M. An Enhanced Viola–Jones Face Detection Method with Skin Mapping & Segmentation. In Artificial Intelligence and Evolutionary Computations in Engineering Systems, Proceedings of the ICAIECES 2017, Madanapalle, India, 27–29 April 2017; Springer: Singapore, 2018; pp. 485–493. [Google Scholar]
  16. Rattakorn, S.; Chunyue, S.; Yu’an, Y. Design and Implementation of Driver Training Expert System based on Driving Posture Recognition. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 145–150. [Google Scholar]
  17. Sankaran, K.S.; Vasudevan, N.; Nagarajan, V. Driver drowsiness detection using percentage eye closure method. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; pp. 1422–1425. [Google Scholar]
  18. Anitha, J.; Mani, G.; Venkata Rao, K. Driver drowsiness detection using viola jones algorithm. In Smart Intelligent Computing and Applications: Proceedings of the Third International Conference on Smart Computing and Informatics, Volume 1; Springer: Singapore, 2020; pp. 583–592. [Google Scholar]
  19. Manjula, P.; Adarsh, S.; Ramachandran, K. Driver inattention monitoring system based on the orientation of the face using convolutional neural network. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–7. [Google Scholar]
  20. Jahnavi, S.; Nandini, C. Smart anti-theft door locking system. In Proceedings of the 2019 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE), Bangalore, India, 19–20 March 2019; pp. 205–208. [Google Scholar]
  21. Sanabria-Macías, F.; Romera, M.M.; Macías-Guarasa, J.; Pizarro, D.; Turnes, J.N.; Reyes, E.J.M. Face tracking with a probabilistic Viola and Jones face detector. In Proceedings of the 45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; Volume 1, pp. 5616–5621. [Google Scholar] [CrossRef]
  22. Kirana, K.C.; Wibawanto, S.; Herwanto, H.W. Redundancy Reduction in Face Detection of Viola–Jones using the Hill Climbing Algorithm. In Proceedings of the 2020 4th International Conference on Vocational Education and Training (ICOVET), Malang, Indonesia, 19 September 2020; pp. 139–143. [Google Scholar]
  23. Sriman, K.P.; Kumar, P.R.; Naveen, A.; Kumar, R.S. Comparison of Paul Viola–Michael Jones algorithm and HOG algorithm for Face Detection. In IOP Conference Series: Materials Science and Engineering, Proceedings of the First International Conference on Circuits, Signals, Systems and Securities (ICCSSS 2020), Tamil Nadu, India, 11–12 December 2020; IOP Publishing: Bristol, UK, 2021; Volume 1084, p. 012014. [Google Scholar]
  24. Joshi, A.; Chavan, V.; Kaveri, P. Semantic gap reduction from mouth feature threshold value using viola jones algorithm. In IOP Conference Series: Materials Science and Engineering, Proceedings of the 1st International Conference on Computational Research and Data Analytics (ICCRDA 2020), Rajpura, India, 24 October 2020; IOP Publishing: Bristol, UK, 2021; Volume 1022, p. 012065. [Google Scholar]
  25. Jensen, O.H. Implementing the Viola–Jones Face Detection Algorithm. Master’s Thesis, Technical University of Denmark-DTU, Lyngby, Denmark, 2008. [Google Scholar]
  26. Damanik, R.R.; Sitanggang, D.; Pasaribu, H.; Siagian, H.; Gulo, F. An application of viola jones method for face recognition for absence process efficiency. In Journal of Physics: Conference Series, Proceedings of the International Conference on Mechanical, Electronics, Computer, and Industrial Technology, Prima, Indonesia, 6–8 December 2017; IOP Publishing: Bristol, UK, 2018; Volume 1007, p. 012013. [Google Scholar]
  27. Mondal, S.K.; Mukhopadhyay, I.; Dutta, S. Review and comparison of face detection techniques. In International Ethical Hacking Conference 2019, Proceedings of the eHaCON 2019, Kolkata, India, 17–25 August 2019; Springer: Singapore, 2020; pp. 3–14. [Google Scholar]
  28. Chaudhari, M.N.; Deshmukh, M.; Ramrakhiani, G.; Parvatikar, R. Face detection using viola jones algorithm and neural networks. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  29. Dabhi, M.K.; Pancholi, B.K. Face detection system based on Viola–Jones algorithm. Int. J. Sci. Res. (IJSR) 2016, 5, 62–64. [Google Scholar]
  30. Srivastava, A.; Mane, S.; Shah, A.; Shrivastava, N.; Thakare, B. A survey of face detection algorithms. In Proceedings of the International Conference on Inventive Systems and Control, Coimbatore, India, 19–20 January 2017; pp. 1–4. [Google Scholar] [CrossRef]
  31. Hardjono, B.; Tjahyadi, H.; Rhizma, M.G.; Widjaja, A.E.; Kondorura, R.; Halim, A.M. Vehicle counting quantitative comparison using background subtraction, viola jones and deep learning methods. In Proceedings of the 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 1–3 November 2018; pp. 556–562. [Google Scholar]
  32. Zafeiriou, S.; Zhang, C.; Zhang, Z. A survey on face detection in the wild: Past, present and future. Comput. Vis. Image Underst. 2015, 138, 1–24. [Google Scholar] [CrossRef]
  33. Umirzakova, S.; Ahmad, S.; Mardieva, S.; Muksimova, S.; Whangbo, T. Deep learning-driven diagnosis: A multi-task approach for segmenting stroke and Bell’s palsy. Pattern Recognit. 2023, 144, 109866. [Google Scholar] [CrossRef]
Figure 1. Calculation of pixel value totals.
Figure 2. Original rectangle feature [26].
Figure 3. Cascade classifier block diagram [32].
Figure 4. Photographs of eyeballs incorrectly identified as nostrils.
Figure 5. Multifunctional intelligent vehicle under natural stationary conditions.
Figure 6. Block diagram of control circuit.
Figure 7. Control flowchart.
Figure 8. Demonstration of smart vehicle straight line effect. (a) The eyes look straight ahead. (b) The vehicle moves forward.
Figure 9. Vehicle’s turning left effect controlled by eyeball movement. (a) The eyes look to the left. (b) The vehicle is making a left turn.
Figure 10. The vehicle’s right-turning motion is controlled by the movement of the nostrils.
Table 1. Comparison of proposed solution and existing solutions.

Company/Organisation | Key Features | Limitations Compared to Our Solution
Munevo Drive | Uses smart glasses for wheelchair control | Requires wearable device, not suitable for all users
Ability Drive | Eye-tracking navigation for wheelchair control | Lacks multi-modal (head, Bluetooth) control options
Imperial College AI Wheelchair | Full AI-driven navigation | High cost, complex design, limited customization
Braze Mobility | Obstacle detection sensors for retrofitting | Only adds sensors to existing wheelchairs, no integrated smart control
