1. Introduction
Humans have at their disposal several sensorimotor channels for perceiving the environment. Among these, vision plays a particularly important role in accessing the world around us, because 85% of the information about our surroundings is obtained through the eyes [1]. Blindness is the condition in which a person is unable to sense information conveyed through the visual channel. People who have very limited vision and must depend on other sensory organs are also considered blind. Therefore, the visually challenged are people who have either partial or total vision loss [2].
According to the World Health Organization (WHO) and the International Agency for the Prevention of Blindness (IAPB), around 285 million people in the world are visually impaired, of whom 39 million are blind [3]. Blind individuals face enormous challenges in their daily routine and must rely on other people to accomplish some of their daily tasks. In addition, to move around, they must use traditional white canes.
In this modern era, where technology is everywhere and involved in almost every daily task, there have also been advancements in white cane technology. Indeed, researchers have developed smart canes equipped with obstacle detection, GPS, and indoor navigation. In this information age, social media plays a very important role in connecting people around the world. To enable people with visual impairments to access these technologies, several research initiatives have been undertaken. Companies such as Facebook are trying to make sure that the information presented on their sites is accessible to all kinds of users. Facebook plans to roll out AI-powered automatic alt text to all screen readers. X (formerly Twitter) already offers AI-generated captions for images. Such functionalities aim to assist people with visual impairments in accessing social media environments [4].
Empowering the visually impaired is not merely about enhancing accessibility; it is about enriching lives and breaking barriers. Beyond the realm of technology, our initiative strives to encourage individuals with visual impairments to embrace physical activity and social interaction, essential facets of a fulfilling life. Resnick [5] underscores a critical issue: blind children often face a lack of motivation and opportunities for physical activity, leading to sedentary behavior and a sense of inadequacy. This trend continues into adulthood, as Modell [6] and Jessup [7] corroborate, highlighting that individuals with disabilities, including visual impairments, often participate less in recreational activities, leading to profound social isolation. Moreover, Folmer [8] sheds light on the alarming consequences of limited physical activity among the visually impaired, which include delays in motor development and an increased susceptibility to various medical conditions.
Our research and the innovative smart cane architecture we propose are not only technological advancements but also beacons of empowerment. By seamlessly integrating advanced sensors, social media connectivity, and novel algorithms, our smart cane not only enhances mobility and accessibility, but also serves as a catalyst for encouraging physical activity and facilitating socialization among the visually impaired. We firmly believe that fostering a sense of independence and belonging in the visually impaired community is not just a goal; it is a societal responsibility. With our pioneering method, we are dedicated to bridging the gap between the physical challenges faced by the visually impaired and the limitless potential for an active and socially connected existence [9,10,11,12].
This study presents a cutting-edge smart cane design aimed at empowering individuals with visual impairments. By incorporating advanced sensors and social media connectivity, the smart cane not only improves accessibility but also promotes physical activity. Three carefully crafted algorithms support step counting, swing detection, and proximity measurement.
Section 2 discusses the related work conducted in this domain and critically evaluates it. The architecture of the proposed smart cane model and its components is presented in Section 3. Section 4 presents the results of the performance of the three developed algorithms, followed by Section 5, which discusses these results. Finally, the main conclusions are summarized in Section 6.
2. Related Work
Social networks like Facebook and Twitter have become deeply embedded in modern life, enabling connection, communication, and community. Indeed, the effects of social media on society are a well-studied phenomenon. However, for the millions of people worldwide with visual impairments, participating in these visual-centric platforms poses significant accessibility challenges that have historically excluded blind people from full usage and engagement [13]. By enabling people to communicate and share information, social media plays a critical role in strengthening the bonds within communities and spreading critical information. The value of social media varies among different user groups, and many previous studies have examined the engagement of different social groups with social media [14]. According to a study by the Pew Research Center, 43% of American Internet users older than 65 use online social networks today, and the main function of social media for seniors is to connect them to their families [15]. While discussing the integration of social media features within a smart cane for blind people, it is imperative to acknowledge the risk of worsening social isolation. While these technologies provide valuable communication opportunities, there is also a risk that individuals may come to rely only on virtual connections and interactions instead of face-to-face, real-life social engagement. To prevent an over-reliance on online social interaction, the smart cane was designed with a balanced approach: it enables the user to connect not only through social media platforms like Facebook, but also incorporates other messaging channels such as direct messaging. This ensures that individuals have various options to interact, minimizing dependency on a single social media platform or mode of communication.
Morris et al. found that mothers’ use of social media differed significantly before and after birth, showing that different social groups embrace social media for distinct reasons, which affects the way they interact with it [16]. To enable blind people to live independent lives, researchers have developed many technologies; however, these devices are often quite expensive, so ordinary visually challenged people (VCP) cannot benefit from them. Our proposed device is focused on enabling these VCP to live a normal life. The proposed model has many features that would enable them to interact with their environment independently [17]. Innovations in assistive technologies are progressively dismantling barriers to enable fuller, more equitable social media participation and autonomy for the blind and visually impaired.
Screen magnification software can enlarge and optimize displays for those with residual vision. However, individuals without functional vision must rely on text-to-speech screen readers that vocalize onscreen text and labels. Screen readers such as VoiceOver for iOS and TalkBack for Android are built into smartphones, allowing users to navigate apps and hear menus, posts, messages, and more read aloud [18].
Refreshable braille displays can connect to phones, converting text into tactile braille characters. Screen readers have significantly increased accessibility, though some functions like photo descriptions remain limited [19]. Still, they establish a strong foundation for social media usage. In addition, dedicated apps tailored for blind people provide streamlined social media access. Easy Social is one popular app that aggregates Facebook, Twitter, LinkedIn, and Instagram into a simplified interface with voiceover support and customizable fonts and contrast. Blind-friendly apps enable posting statuses, commenting, messaging, and listening to feeds without visually parsing crowded layouts [20]. However, app development tends to trail the mainstream platforms: discrepancies in features and delays in accessing new options persist as a drawback, though steady progress continues.
VizWiz is a mobile application that enables blind people to take a picture of their environment and ask questions about it, with the app answering their questions through screen reading software. In pilot testing, the answers were collected from the Amazon Mechanical Turk service. Mechanical Turk is an online marketplace of human intelligence tasks (HITs) that workers can complete for small amounts of money [21].
In 2009, a poll of 62 blind people by the American Foundation for the Blind revealed that about half of the participants used Facebook, while a third used Twitter, and a quarter used LinkedIn and MySpace. Moreover, in a 2010 study, Wentz and Lazar found that Facebook’s website was more difficult for blind users to navigate than Facebook’s mobile phone application; the ease of access may affect the frequency of use [10]. Advanced technologies enable blind people to identify the visual content in pictures; these include image recognition, crowd-powered systems, and tactile graphics. Further interaction with visual objects is also possible, for example, through technologies that enable blind people to take better photos and that enhance the photo sharing experience with audio augmentations. Lučić, Sedlar, and Delić (2011) tested a prototype of the computer educational game Lugram for visually challenged children. They found that basic motor skills were important for a blind user to play Lugram. Initially, the blind children needed the help of sighted children, and afterward, they started playing on their own.
Research conducted by Ulrich (2011) led to the development of a cane that used robotic technologies to assist blind people. It used ultrasonic sensors to detect obstacles and computed an alternative path using an embedded computer, with the steering action accomplished by producing a noticeable force in the handle. Helal, Moore, and Ramachandran (2001) studied a wireless pedestrian navigation system for visually impaired people. This system, called Drishti, boosts the mobility of a blind person and allows them to navigate freely [11]. In this project, a new method was developed to enable blind people to use social media via a smart cane. The developed system will enable the user to use social media websites such as Facebook and Twitter. Jacob et al. conducted research on screen readers such as JAWS (Job Access With Speech) [12] and NVDA (NonVisual Desktop Access) [12], along with VoiceOver for iOS devices. Blind people rely heavily on these tools to gain access to social media platforms. Such tools provide text-to-speech capabilities and braille output, enabling users to interact with the content [12].
Braille displays, tactile devices that provide access to digital content, including content displayed on social media platforms, have also been developed. They are helpful for visually challenged people because the text is rendered in braille. These devices are considered beneficial as they enhance a user’s social media experience by providing a more tactile and interactive interface for visually impaired users. Research on these devices was conducted by Kim (2019) [22], where the authors developed a braille device to make it easy for visually impaired people to interact with online social media platforms.
Additionally, to post photos and videos, smart canes such as WeWalk integrate cameras that recognize objects, faces, and text for audible identification [23]. Users can take photos by tapping the cane and share them on social sites. Computer vision features will continue advancing, enabling more autonomous photo capturing. Limitations remain with image esthetics and the inability to independently assess composition quality before sharing. Still, smart canes vastly widen participation. Additionally, linking voice assistant services like Siri and Alexa allows for hands-free social media use, from dictating posts to asking for notifications to be read aloud [9]. Commands like “Hey Siri, post to Facebook” streamline sharing by eliminating cumbersome typing. However, privacy risks arise with always-listening devices, and glitchy transcription can garble posts. Human-like voice assistants hold promise for managing increasingly natural conversational interactions.
TalkBack and VoiceOver are two text-to-speech screen readers examined by Folego [24]. TalkBack serves Android users, while VoiceOver is for iOS users. Both help in navigating social media apps by audibly describing the content available onscreen, and voice commands are also provided. This makes it easy for a blind person to understand everything without requiring help from anyone.
Social media platforms such as Facebook have introduced automatic alt-text features that use image recognition technology to generate descriptions of the photos in a user’s newsfeed. This feature provides visually impaired users with more context when engaging with visual content on online social platforms. Twitter likewise allows its blind users to add alternative text descriptions to the images posted in tweets, making visual content readable through screen readers. This type of feature enables users to provide descriptions for the images they share on social media. Kuber et al. [25] conducted research on how these platforms use such features and developed mobile screen readers for visually impaired users.
Smith-Jackson et al. [26] recommended the use of contoured shapes to improve grip, greater spacing among buttons to assist the “perception of targets”, and additional feedback-based awareness of selections to aid blind or physically disabled users of mobile phones.
Singh et al. [27] conducted research to help blind users operate digital devices without another person’s assistance. The device also assists people with hearing aids, enabling them to link to the digital world. The proposed framework, known as the “Haptic Encoded Language Framework (HELF)”, makes use of haptic technology to enable a blind person to write text digitally using swiping gestures and to comprehend text via vibrations.
As noted in the Introduction, Resnick [5], Modell [6], and Jessup [7] reported that blind children and adults with visual impairments often lack the motivation and opportunity for physical and recreational activity, resulting in sedentary behavior and social isolation, while Folmer [8] linked this lack of physical activity to delays in motor development and an increased risk of medical conditions.
In the literature, a number of sensor-based approaches have been proposed to enhance the participation of visually impaired people in different physical activities. These approaches draw on a range of technologies, for example, wearable sensors, haptic feedback systems, and auditory cues, which provide real-time feedback and assistance during activities such as walking, running, and sports.
For instance, researchers have explored the incorporation of inertial measurement units (IMUs) into gadgets to track movement patterns and offer assistance to individuals with visual impairments while they engage in physical activities [28]. These devices are capable of identifying alterations in posture, walking style, and orientation, providing auditory or tactile cues to help users maintain technique and navigate around obstacles [28].
In addition, a haptic feedback system was proposed by the authors of [29] to enhance the sensory perception of blind individuals during physical activities. Such systems use vibratory or tactile stimuli to convey information about the environment, such as the presence of nearby objects or changes in terrain, enabling users to navigate confidently and safely [29].
Moreover, developments in wearable technology and machine learning algorithms have enabled smooth navigation systems for visually impaired individuals. These systems utilize sensors to detect obstacles, map out surroundings, and provide personalized guidance to users during outdoor activities like hiking or urban navigation [30].
Researchers [31] highlighted recent advancements in assistive technologies for the visually impaired, addressing challenges in mobility and daily life. With a focus on indoor and outdoor solutions, the paper explores location and feedback methods, offering valuable insights for the integration of smart cane technology.
The paper underscores the growing concern of visual impairment globally, with approximately 1.3 billion affected individuals, a number projected to triple by 2050. Addressing the challenges faced by the visually impaired, the proposed “Smart Cane device” leverages technological tools, specifically cloud computing and IoT wireless scanners, to enhance indoor navigation. In response to the limitations of traditional options such as white canes and guide dogs, the Smart Cane aims to seamlessly facilitate the displacement of visually impaired individuals, offering a novel solution for navigation and communication with their environment [32].
In summary, a few studies [9,10,11,12] have indicated that individuals with visual impairments have limited engagement in physical activities, which can have negative effects on their health and well-being. The proposed approach has several unique features in comparison to existing solutions such as WeWalk. First, it is integrated with the Facebook Chat API, enabling the user to engage in direct messaging and social interactions on the platform, thereby improving accessibility for visually impaired people. Second, it provides step challenge functionality, which fosters healthy competition and community engagement among visually impaired individuals and promotes a healthier lifestyle. Third, the system integrates a Raspberry Pi 4, which improves connectivity and performance for smoother operation, ensuring a reliable user experience. Beyond these, fbchat and Python Facebook API integration allow for effective communication with Facebook servers, supporting seamless interaction for the users, and Speech Recognition Library integration is one of the most significant features of the device, as it enables device management through voice commands, further improving accessibility; a minimal sketch of how these software pieces can be combined is given below. This proposed solution fills a gap by combining health promotion, social interaction, and accessibility features tailored for blind people. These features make the device innovative and distinct in the domain of assistive technology for the visually impaired.
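As a minimal illustration of how these software components can fit together, the following Python sketch pairs the Speech Recognition Library with fbchat to send one dictated message over Facebook chat. The credentials and thread ID are placeholders, error handling is omitted, and the sketch assumes fbchat’s Client API and the recognizer’s Google backend; it is an illustrative sketch under those assumptions, not the cane’s production firmware.

# Minimal sketch: dictate a Facebook message by voice.
# Assumptions: fbchat's Client API and SpeechRecognition's Google backend;
# EMAIL, PASSWORD, and FRIEND_THREAD_ID are placeholders.
import speech_recognition as sr
from fbchat import Client
from fbchat.models import Message, ThreadType

EMAIL = "user@example.com"        # placeholder credentials
PASSWORD = "password"             # placeholder credentials
FRIEND_THREAD_ID = "0000000000"   # placeholder Facebook thread ID

def listen_once(recognizer, microphone):
    """Capture one utterance from the microphone and return it as text."""
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)  # compensate for background noise
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)        # speech-to-text

def main():
    recognizer = sr.Recognizer()
    microphone = sr.Microphone()
    client = Client(EMAIL, PASSWORD)                 # log in to Facebook chat
    try:
        text = listen_once(recognizer, microphone)
        # Send the dictated text as a direct message to the chosen contact
        client.send(Message(text=text),
                    thread_id=FRIEND_THREAD_ID,
                    thread_type=ThreadType.USER)
    finally:
        client.logout()

if __name__ == "__main__":
    main()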
4. Results
To acquire the desired outcome from the proposed cane-stick device, three algorithms were tested. The first algorithm was designed to measure the number of steps taken by the user based on the data from the accelerometer. It sets several constants: a minimum threshold for step detection (thresholdMin), a detection time window (timeWindow), and the size of the analysis window (windowSize). In this algorithm, the calculateAverage function is used to determine the average of the circular buffer comprising the accelerometer readings. The algorithm then enters a continuous loop that repeatedly reads data from the accelerometer, measures the magnitude, and stores the value in the circular buffer.
Once the time window has elapsed, the average of the values in the buffer is calculated. If it is greater than the minimum threshold, the step count is incremented, indicating that steps have been detected. The algorithm then shifts the circular buffer by one position and the process continues. The loop keeps running until all measurements are taken. Finally, the total number of detected steps is stored in the steps variable.
This algorithm was tested ten times, and the number of steps counted by the algorithm was compared with the actual number of steps taken by the user.
Table 1 shows the outcome achieved by implementing the first algorithm. It was observed that the accuracy of this algorithm varies across scenarios, sometimes overestimating and sometimes underestimating the actual step count.
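One convenient way to summarize such a comparison is the mean absolute percentage error (MAPE) between the counted and actual steps across trials. The short Python sketch below computes it on hypothetical (actual, counted) pairs; the real measurements are those reported in Table 1.

# Sketch: summarizing step-count accuracy over repeated trials.
# The (actual, counted) pairs below are hypothetical placeholders; the real
# measurements are those reported in Table 1.

trials = [(50, 53), (50, 48), (50, 51), (50, 47), (50, 50),
          (50, 55), (50, 49), (50, 52), (50, 46), (50, 50)]

def mean_absolute_percentage_error(pairs):
    """Average of |counted - actual| / actual over all trials, in percent."""
    return 100 * sum(abs(counted - actual) / actual
                     for actual, counted in pairs) / len(pairs)

print(f"MAPE: {mean_absolute_percentage_error(trials):.1f}%")  # -> MAPE: 4.2%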
Figure 4 shows the graphical representation of the implementation of Algorithm 1.
Algorithm 1 Step Counter Algorithm Using Accelerometer Data and Moving Average Filter.

// Define variables
const thresholdMin = 0.1    // Minimum threshold for detecting a step
const timeWindow = 100      // Time window for detection in milliseconds
const windowSize = 10       // Size of the analysis window
const buffer = array of size windowSize
int steps = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
    sum = 0
    for each value in buffer:
        sum = sum + value
    return sum / windowSize

// Loop to read data from the accelerometer
while true:
    readAccelerometer()                          // Read accelerometer data
    accelerationNorm = norm(of the read data)    // Calculate the norm of the acceleration

    // Add the acceleration norm to the circular buffer
    buffer[current time % windowSize] = accelerationNorm

    if current time >= timeWindow:
        average = calculateAverage(buffer)
        if average > thresholdMin:
            steps = steps + 1
        shift the buffer by one position

    wait(sampling interval)    // Wait for some time between readings

// At the end of the measurement, the “steps” variable will contain the number of detected steps
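To make the filtering logic concrete, the following Python sketch reproduces the moving-average step counter on a synthetic trace of acceleration norms. It replaces the hardware read with a list of samples and counts a step on each rising edge of the window average, a slight refinement of the per-window increment in the pseudocode, which would otherwise count one movement burst several times; the threshold, window size, and trace are illustrative assumptions rather than values recorded on the cane.

# Sketch of Algorithm 1's moving-average filter in Python. The accelerometer
# read is simulated with a synthetic trace of acceleration norms; a step is
# counted on each rising edge of the window average.
from collections import deque

THRESHOLD_MIN = 0.1  # minimum window average that indicates a step
WINDOW_SIZE = 10     # samples per analysis window

def count_steps(acceleration_norms):
    window = deque(maxlen=WINDOW_SIZE)  # circular buffer: oldest sample drops off
    steps = 0
    above = False                       # was the previous average above threshold?
    for sample in acceleration_norms:
        window.append(sample)
        if len(window) < WINDOW_SIZE:
            continue                    # wait until the first window is full
        average = sum(window) / WINDOW_SIZE
        if average > THRESHOLD_MIN and not above:
            steps += 1                  # count only when the average first rises
        above = average > THRESHOLD_MIN
    return steps

# Synthetic trace: two movement bursts separated by stillness
trace = [0.0] * 10 + [0.5] * 10 + [0.0] * 10 + [0.6] * 10
print(count_steps(trace))  # -> 2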
The second algorithm was designed to detect and count the number of “swings” made by the cane using the lateral acceleration data acquired from the accelerometer. Initially, a few variables were initialized, similar to the first algorithm. Additionally, a buffer was used to store the lateral acceleration data.
The algorithm comprises a loop that constantly reads data from the accelerometer, focusing particularly on the left–right movement. These values are stored in the buffer array, and the average value is then calculated. If the average value exceeds the minimum threshold, the swing counter is incremented, indicating that a swing has been detected. Next, the buffer is shifted one position to accommodate new data. The loop continues until all swing measurements are taken.
Figure 5 shows the graphical representation of Algorithm 2.
Table 2 shows the values obtained by implementing the second algorithm. Here, one step = one swing.
Algorithm 2 Step Counter Algorithm Using Lateral Accelerometer Data.

// Define variables
const thresholdMin = 0.1    // Minimum threshold for detecting a swing
const timeWindow = 1000     // Time window for detection in milliseconds
const buffer = array of size timeWindow
int swings = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
    sum = 0
    for each value in buffer:
        sum = sum + value
    return sum / timeWindow

// Loop to read data from the accelerometer
while true:
    readAccelerometer()    // Read accelerometer data
    lateralAcceleration = acceleration in the lateral direction (left-right)

    // Add the lateral acceleration to the buffer
    buffer[current time % timeWindow] = lateralAcceleration

    if current time >= timeWindow:
        average = calculateAverage(buffer)
        if average > thresholdMin:
            swings = swings + 1
        shift the buffer by one position

    wait(sampling interval)    // Wait for some time between readings

// At the end of the measurement, the “swings” variable will contain the number of detected swings
From the second algorithm, it was observed that more swings were counted than actually occurred. This suggests that the algorithm is somewhat oversensitive and prone to overestimating the swings under certain conditions.
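One plausible mitigation, sketched below in Python, is a refractory period: once a swing is registered, further detections are ignored for a short lockout interval, so that a single physical swing spanning several samples is counted once. The 300 ms lockout, 50 ms sampling period, and sample data are illustrative assumptions, not parameters of the implemented algorithm.

# Sketch: suppressing repeated swing detections with a refractory period.
# The 300 ms lockout is an assumed value that would need tuning on the cane.

THRESHOLD = 0.1        # minimum lateral acceleration average for a swing
REFRACTORY_MS = 300    # ignore further detections this long after a swing

def count_swings(samples, sample_period_ms=50):
    """Count threshold crossings, ignoring those inside the lockout window."""
    swings = 0
    last_swing_time = -REFRACTORY_MS  # allow a detection at time zero
    for i, lateral_avg in enumerate(samples):
        t = i * sample_period_ms
        if lateral_avg > THRESHOLD and t - last_swing_time >= REFRACTORY_MS:
            swings += 1
            last_swing_time = t
    return swings

# One physical swing spread over several consecutive samples counts once
print(count_swings([0.0, 0.3, 0.4, 0.3, 0.0, 0.0, 0.0, 0.5, 0.4, 0.0]))  # -> 2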
In the third algorithm, a combination of step detection using the accelerometer and proximity measurement using the Bluetooth and Wi-Fi RSSI signals was applied. Similar to the previous algorithms, a minimum threshold was set for step detection, along with the initialization of a buffer to store the lateral acceleration. For the proximity measurement, an rssiThreshold was considered, and the maxDistance constant as well as two counter variables were used for counting the Wi-Fi and Bluetooth proximities.
To count the detected steps, the calculateAverage function was applied to the buffer. When the average lateral acceleration exceeded the threshold, the step counter was incremented, indicating that a step had been detected.
For the initial processing, the same steps as in the first two algorithms were followed; afterward, the algorithm reads the Bluetooth and Wi-Fi RSSI signal strengths and checks whether they fall within the threshold and distance range, incrementing the respective proximity counters. The system then waits for the sampling interval before repeating the process.
At the end of the calculation, the algorithm provides counts of the detected steps and the Bluetooth and Wi-Fi proximities in the variables steps, bluetoothDistances, and wifiDistances, respectively.
Figure 6 shows the graphical representation of the implementation of Algorithm 3.
Table 3 shows the values measured by the implementation of the third algorithm.
Algorithm 3 Step Counter Algorithm Using Accelerometer and RSSI Signals.

// Define variables for the accelerometer
const minThreshold = 0.1       // Minimum threshold to detect a step
const stepTimeWindow = 1000    // Time window for step detection in milliseconds
const stepBuffer = array of size stepTimeWindow
int steps = 0

// Define variables for distance measurement with RSSI
const rssiThreshold = -70    // RSSI threshold (in dBm) to consider proximity
const maxDistance = 10       // Maximum distance to consider proximity (in meters)
int bluetoothDistances = 0
int wifiDistances = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
    sum = 0
    for each value in buffer:
        sum = sum + value
    return sum / stepTimeWindow

// Loop for reading accelerometer data
while true:
    readAccelerometer()    // Read accelerometer data
    lateralAcceleration = acceleration in the lateral direction (left-right)

    // Add lateral acceleration to the buffer
    stepBuffer[current time % stepTimeWindow] = lateralAcceleration

    if current time >= stepTimeWindow:
        averageStep = calculateAverage(stepBuffer)
        if averageStep > minThreshold:
            steps = steps + 1
        shift the stepBuffer by one position

    // Read Bluetooth and WiFi RSSI
    bluetoothSignalStrength = readBluetoothRSSI()
    wifiSignalStrength = readWifiRSSI()

    // Check for proximity based on RSSI: the signal must be at least as strong as
    // rssiThreshold, and the distance estimated from it must not exceed maxDistance
    // (estimateDistance converts an RSSI value in dBm into meters)
    if bluetoothSignalStrength >= rssiThreshold and estimateDistance(bluetoothSignalStrength) <= maxDistance:
        bluetoothDistances = bluetoothDistances + 1
    if wifiSignalStrength >= rssiThreshold and estimateDistance(wifiSignalStrength) <= maxDistance:
        wifiDistances = wifiDistances + 1

    wait(sampling interval)    // Wait for some time between readings

// At the end of the measurement, the “steps”, “bluetoothDistances”, and “wifiDistances” variables
// will contain the respective counts of detected steps, Bluetooth proximities, and WiFi proximities.
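The estimateDistance step in Algorithm 3 can be made concrete with the log-distance path-loss model, a standard way of converting RSSI into an approximate range. In the Python sketch below, the reference power of -59 dBm at one meter and the path-loss exponent of 2.0 (free space) are assumed values that would need calibration against the cane’s actual Bluetooth and Wi-Fi hardware.

# Sketch: estimating distance from RSSI with the log-distance path-loss model.
# TX_POWER (RSSI at 1 m) and PATH_LOSS_EXPONENT are assumed values that must
# be calibrated for the actual Bluetooth/Wi-Fi hardware on the cane.

TX_POWER = -59.0          # assumed RSSI in dBm measured at 1 m
PATH_LOSS_EXPONENT = 2.0  # assumed free-space propagation

RSSI_THRESHOLD = -70      # same proximity threshold as Algorithm 3 (dBm)
MAX_DISTANCE = 10         # same maximum proximity range as Algorithm 3 (m)

def estimate_distance(rssi_dbm):
    """Distance in meters from RSSI via the log-distance path-loss model."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def is_proximate(rssi_dbm):
    """Proximity test used by Algorithm 3: strong enough and close enough."""
    return rssi_dbm >= RSSI_THRESHOLD and estimate_distance(rssi_dbm) <= MAX_DISTANCE

# Example readings (hypothetical): a nearby beacon and a distant access point
print(estimate_distance(-59))   # -> 1.0 (about one meter)
print(is_proximate(-65))        # -> True: roughly 2 m away
print(is_proximate(-85))        # -> False: below the RSSI threshold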
In our study, we conducted extensive testing of the three algorithms for step count calculation using accelerometer data. To ensure robustness and consistency, each algorithm was subjected to ten repetitions by each participant. The results are visually represented in the following graph, providing a clear comparison of the performance of these algorithms.
5. Discussion
This research presented and implemented three algorithms in a smart cane to measure the steps of a visually impaired person. These algorithms measure not only steps, but also swings and proximities, making use of Bluetooth and Wi-Fi RSSI signals.
From the analysis of Algorithm 1, it was noted that the step counting mechanism was influenced by the lateral movements of the cane, as visually impaired users often sweep the cane left and right to detect obstacles. This motion could lead to an overestimation or underestimation of the step count, as the algorithm did not adequately differentiate between forward steps and lateral cane movements. Furthermore, Algorithm 1 lacked validation for the user’s actual motion direction, whether they were moving forward, backward, or to the side. This led to fluctuating accuracy rates under different walking scenarios, highlighting the need for a more sophisticated algorithm that could discern the intended direction of travel and discriminate between obstacle detection sweeps and the actual steps taken.
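As an illustration of the kind of refinement this analysis suggests, the following Python sketch classifies individual accelerometer samples by comparing the lateral and forward axes, so that sweep-dominated motion can be excluded from step detection. The axis mapping and both thresholds are illustrative assumptions rather than part of the implemented algorithms.

# Sketch: discriminating forward steps from lateral cane sweeps by comparing
# per-axis acceleration. The axis mapping (x = lateral, y = forward) and both
# thresholds are illustrative assumptions.

STEP_THRESHOLD = 0.3   # forward acceleration that suggests a step (g)
SWEEP_RATIO = 1.5      # lateral/forward ratio that marks a cane sweep

def classify_sample(ax_lateral, ay_forward):
    """Label one accelerometer sample as 'step', 'sweep', or 'idle'."""
    if abs(ax_lateral) > SWEEP_RATIO * abs(ay_forward):
        return "sweep"   # motion dominated by left-right sweeping: ignore
    if abs(ay_forward) > STEP_THRESHOLD:
        return "step"    # forward-dominated impulse: candidate step
    return "idle"

print(classify_sample(0.6, 0.1))   # -> 'sweep': lateral motion dominates
print(classify_sample(0.1, 0.5))   # -> 'step': forward impulse
print(classify_sample(0.05, 0.1))  # -> 'idle'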
The second algorithm, which counted swings based on the swing detection phenomenon, overestimated the swings in comparison to the real count. This shows that the algorithm is sensitive to overestimating swings under particular conditions.
Considering the third and last algorithm, which combined step detection with proximity measurement, it was observed that the steps calculated by this algorithm closely matched the real step count. This result shows that detecting steps using the accelerometer provided the most accurate results, while the RSSI-based proximity measurements enabled the counting of Bluetooth and Wi-Fi proximities.
The results show the effectiveness of Algorithm 3 in accurately detecting steps and also highlight the potential of a smart cane to help visually challenged people in their daily lives by tracking steps and enabling social media interaction.
6. Conclusions
This research has not only presented the design of a smart cane aimed at improving the social media experiences of the visually impaired, but also recognized the essential role of technology in enhancing the personal safety of individuals as they navigate outside their homes. While the study originally focused on integrating blind individuals into the digital age and improving independence through social media access, it is paramount to underscore the smart cane’s contribution to personal security.
The multifaceted design of the smart cane encompasses audio–tactile interaction, gesture detection, speech-to-text translation, and cloud connectivity through Bluetooth, which collectively serve to create a safer navigation experience. The addition of proximity sensors, GPS tracking, and emergency alert systems provides users with the confidence to explore their surroundings securely. The software components, including the Facebook Chat API and the advanced step count algorithm, are complemented by the device’s voice recognition capabilities, which not only enhance the user interaction with social media, but also bolster the users’ safety by allowing hands-free operation and immediate access to assistance if needed.
Algorithm 3, in particular, demonstrated superior performance in step count accuracy, which is integral to the safety features, as precise step and swing detection are crucial for avoiding obstacles and hazards.
Future work including user evaluations with visually impaired individuals will not only assess the smart cane’s usability and effectiveness in real-world scenarios, but will also prioritize the evaluation of its safety features. Ensuring the practical usability of the smart cane includes a thorough validation of its security and emergency response systems, which are vital for the safety and well-being of its users. By emphasizing personal safety alongside social media enhancement, the smart cane represents a holistic approach to supporting the visually impaired in their quest for a more independent and secure lifestyle.