Article

Integration of Smart Cane with Social Media: Design of a New Step Counter Algorithm for Cane

by Mohamed Dhiaeddine Messaoudi, Bob-Antoine J. Menelas * and Hamid Mcheick
Department of Computer Sciences and Mathematics, University of Quebec at Chicoutimi, Chicoutimi, QC G7H 2B1, Canada
* Author to whom correspondence should be addressed.
IoT 2024, 5(1), 168-186; https://doi.org/10.3390/iot5010009
Submission received: 14 January 2024 / Revised: 24 February 2024 / Accepted: 27 February 2024 / Published: 14 March 2024

Abstract: This research introduces an innovative smart cane architecture designed to empower visually impaired individuals. By integrating advanced sensors and social media connectivity, the smart cane enhances accessibility and encourages physical activity. Three carefully developed algorithms provide step counting, swing detection, and proximity measurement. The smart cane’s architecture comprises platform, communications, sensor, calculation, and user interface layers, providing comprehensive assistance for visually impaired individuals. Hardware components include an audio–tactile interaction module, an input command module, an integrated microphone, local storage, a step count module, cloud integration, and a rechargeable battery. Software components (v1.9.7) include integrations of the Facebook Chat API, the python-facebook-api, the fbchat library, and the SpeechRecognition library. Overall, the proposed smart cane offers a comprehensive solution for enhancing mobility, accessibility, and social engagement for visually impaired individuals. By fostering socialization and independence, it not only improves mobility but also enhances the overall well-being of the visually impaired community, representing a significant stride toward a more inclusive society.

1. Introduction

Humans have at their disposal several sensorimotor channels to perceive the environment. Among these, vision plays a very important role in accessing the environment around us, because 85% of the information about our surroundings is obtained through the eyes [1]. Blindness is the condition in which a person is unable to sense information conveyed through the visual channel. People who have very limited vision and must depend on other sensory organs are also considered blind. The visually challenged are therefore people with partial or total vision loss [2].
According to the World Health Organization (WHO) and the International Agency for the Prevention of Blindness (IAPB), around 285 million people in the world are visually impaired, of whom 39 million are blind [3]. Blind individuals face enormous challenges in their daily routine and must rely on other people to accomplish some of their daily tasks. In addition, to move about, they must use traditional canes.
In this modern era, where technology is everywhere and involved in almost every daily task, there have also been advances in cane technology. Indeed, researchers have developed canes equipped with obstacle detection, GPS, and indoor navigation. In this information age, social media plays a very important role in connecting people around the world, and several research initiatives have been undertaken to give people with visual impairments access to these technologies. Companies such as Facebook are trying to ensure that the information presented on their sites is accessible to all kinds of users. Facebook plans to roll out AI-powered automatic alt text to all screen readers, and X (formerly Twitter) already provides AI-generated captions for images. Such functionalities aim to assist people with visual impairments in accessing social media environments [4].
Empowering the visually impaired is not merely about enhancing accessibility; it is about enriching lives and breaking barriers. Beyond the realm of technology, our initiative strives to encourage individuals with visual impairments to embrace physical activity and social interaction, essential facets of a fulfilling life. Resnick [5] underscores a critical issue: blind children often face a lack of motivation and opportunities for physical activity, leading to sedentary behavior and a sense of inadequacy. This trend continues into adulthood, as Modell [6] and Jessup [7] corroborate, highlighting that individuals with disabilities, including visual impairments, often participate less in recreational activities, leading to profound social isolation. Moreover, Folmer [8] sheds light on the alarming consequences of limited physical activity among the visually impaired, which include delays in motor development and an increased susceptibility to various medical conditions.
Our research and the innovative smart cane architecture we propose are not only technological advancements but also beacons of empowerment. By seamlessly integrating advanced sensors, social media connectivity, and novel algorithms, our smart cane not only enhances mobility and accessibility but also serves as a catalyst for encouraging physical activity and facilitating socialization among the visually impaired. We firmly believe that fostering a sense of independence and belonging in the visually impaired community is not just a goal; it is a societal responsibility. With our pioneering method, we are dedicated to bridging the gap between the physical challenges faced by the visually impaired and the limitless potential for an active and socially connected existence [9,10,11,12].
This study presents a cutting-edge smart cane design aimed at empowering individuals with visual impairments. By incorporating advanced sensors and social media connectivity, the smart cane not only improves accessibility but also promotes physical activity. The implementation of three carefully crafted algorithms ensures precise step counting, swing detection, and proximity measurement. Section 2 discusses the related work conducted in this domain and critically evaluates it. The architecture of the proposed smart cane model and components is presented in Section 3. Section 4 presents the results of the performance of the three developed algorithms followed by Section 5, which discusses these results. Finally, the main conclusions are summarized in Section 6.

2. Related Work

Social networks like Facebook and Twitter have become deeply embedded in modern life, enabling connection, communication, and community; their effects on society are a well-studied phenomenon. However, for the millions of people worldwide with visual impairments, these visually centered platforms pose significant accessibility challenges that have historically excluded blind people from full usage and engagement [13]. By enabling people to communicate and share information, social media plays a critical role in strengthening the bonds between communities and spreading critical information. The value of social media varies among user groups, and many previous studies have examined the engagement of different social groups with social media [14]. According to a study by the Pew Research Center, 43% of American Internet users older than 65 use online social networks today, and the main function of social media for seniors is to connect them to their families [15]. In discussing the integration of social media features into a smart cane for blind people, it is imperative to acknowledge the risk of worsening social isolation: while these technologies provide valuable communication opportunities, individuals may come to rely only on virtual connections instead of face-to-face, real-life social engagement. To prevent an over-reliance on online interaction, the smart cane was designed with a balanced approach. It enables the user to connect not only through social media platforms like Facebook, but also through other messaging channels such as direct messaging. This ensures that individuals have various options to interact, minimizing dependency on a single social media platform or mode of communication.
Morris et al. found that mothers’ use of social media differed significantly before and after childbirth, showing that different social groups embrace social media for distinct reasons, which affects the way they interact with it [16]. Researchers have developed many technologies to enable blind people to live independent lives; however, these devices are often quite expensive, so ordinary visually challenged people (VCP) cannot benefit from them. Our proposed device focuses on enabling these users to live a normal life, with many features that allow them to interact with their environment independently [17]. Innovations in assistive technologies are progressively dismantling barriers to enable fuller, more equitable social media participation and autonomy for the blind and visually impaired.
Screen magnification software can enlarge and optimize displays for those with residual vision. However, individuals without functional vision must rely on text-to-speech screen readers that vocalize onscreen text and labels. Screen readers such as VoiceOver for iOS and TalkBack for Android are built into smartphones, allowing users to navigate apps and hear menus, posts, messages, and more read aloud [18].
Refreshable braille displays can connect to phones, converting text into tactile braille characters. Screen readers have significantly increased accessibility, though some functions like photo descriptions remain limited [19]. Still, they establish a strong foundation for social media usage. In addition, dedicated apps tailored for blind people provide streamlined social media access. Easy Social is one popular app that aggregates Facebook, Twitter, LinkedIn, and Instagram into a simplified interface with VoiceOver support and customizable fonts and contrast. Blind-friendly apps enable posting statuses, commenting, messaging, and listening to feeds without visually parsing crowded layouts [20]. However, app development tends to trail the mainstream platforms: discrepancies in features and delays in accessing new options persist as drawbacks, though steady progress continues.
VizWiz is a mobile application that enables blind people to take a picture of their environment, ask questions about the picture, and hear the answers through screen-reading software. In pilot testing, the answers were collected from the Amazon Mechanical Turk service, an online marketplace of human intelligence tasks (HITs) that workers complete for small amounts of money [21].
In 2009, a poll of 62 blind people by the American Foundation for the Blind revealed that about half of the participants used Facebook, while a third used Twitter and a quarter used LinkedIn and MySpace. Moreover, Wentz and Lazar found that Facebook’s website was more difficult for blind users to navigate than Facebook’s mobile phone application; ease of access may affect the frequency of use [10]. Advanced technologies enable blind people to identify the visual content in pictures; these include image recognition, crowd-powered systems, and tactile graphics. Further interaction with visual objects is also possible, for example, through technologies that help blind people take better photos and through audio augmentations that enhance the photo-sharing experience. Lučić, Sedlar, and Delić (2011) tested a prototype of the educational computer game Lugram for visually challenged children. They found that basic motor skills were important for a blind user to play Lugram: initially, the blind children needed the help of sighted children, but afterward, they started playing on their own.
Research conducted by Ulrich (2011) led to the development of a cane that used robotic technologies to assist blind people. It used ultrasonic sensors to detect obstacles, processed by an embedded computer, and steering guidance was conveyed by producing a noticeable force in the handle. Helal, Moore, and Ramachandran (2001) studied a wireless pedestrian navigation system for visually impaired people called Drishti, which boosts the mobility of a blind person and allows them to navigate freely [11]. In the present project, a new method was developed to enable blind people to use social media through a smart cane; the developed system lets the user access social media sites such as Facebook and Twitter. Jacob et al. conducted research on screen readers such as JAWS (Job Access With Speech) and NVDA (NonVisual Desktop Access) [12], along with VoiceOver for iOS devices. Blind people rely heavily on these tools to access social media platforms; they provide text-to-speech capabilities and Braille output, enabling users to interact with the content [12].
Braille displays have also been developed; these tactile devices provide access to digital content, including content displayed on social media platforms, by rendering text in braille. They are considered beneficial as they give visually impaired users a more tactile and interactive interface, enhancing their social media experience. Kim et al. (2019) [22] developed such a braille device to make it easier for visually impaired people to interact with online social media platforms.
Additionally, to post photos and videos, smart canes such as the WeWalk smart cane integrate cameras that recognize objects, faces, and text for audible identification [23]. Users can take photos by tapping the cane and share them on social sites, and computer vision features will continue advancing, enabling more autonomous photo capture. Limitations remain with image esthetics and the inability to independently assess composition quality before sharing; still, smart canes vastly widen participation. Additionally, linked services like Siri and Alexa allow for hands-free social media use, from dictating posts to asking for notifications to be read aloud [9]. Commands like “Hey Siri, post to Facebook” streamline sharing by eliminating cumbersome typing. However, privacy risks arise with always-listening devices, and glitchy transcription can garble posts. Human-like voice assistants hold promise for managing increasingly natural conversational interactions.
TalkBack and VoiceOver are two text-to-speech screen readers: TalkBack serves Android users, while VoiceOver is for iOS users [24]. Both help users navigate social media apps by audibly describing the onscreen content and by supporting voice commands, making it easy for a blind person to follow everything without requiring help from anyone else.
Social media platforms such as Facebook have introduced automatic alt-text features that use image recognition technology to generate descriptions of the photos in a user’s newsfeed. This feature gives visually impaired users more context when they engage with visual content on online social platforms. Twitter likewise allows users to add alternative text descriptions to the images they post or tweet, making visual content readable through screen readers. Kuber et al. [25] investigated how such features are used and studied the accessibility of mobile screen readers for visually impaired users.
Smith-Jackson et al. [26] recommended the use of contoured shapes to improve grip, greater spacing between buttons to assist the “perception of targets”, and feedback confirming selections, to aid blind or physically disabled users of mobile phones.
Singh et al. [27] conducted research to help blind users operate digital devices without another person’s assistance; their device also assists people who use hearing aids, enabling them to connect to the digital world. The proposed framework, the “Haptic Encoded Language Framework (HELF)”, makes use of haptic technology to let a blind person write text digitally with swiping gestures and comprehend text through vibrations.
Resnick [5] emphasized that blind children often lack motivation and opportunity for physical activity, resulting in sedentary behavior and feelings of inadequacy. Modell [6] and Jessup [7] further supported these findings, indicating that people with disabilities, including visual impairments, often participate less in recreational activities and experience social isolation. Folmer [8] highlighted that a lack of physical activity is a concern for individuals with visual impairments, leading to delays in motor development and an increased risk of medical conditions.
The literature describes a number of sensor-based approaches that aim to enhance the participation of visually impaired people in physical activities. These approaches span a range of technologies, for example, wearable sensors, haptic feedback systems, and auditory cues, which provide real-time feedback and assistance during activities such as walking, running, and sports.
For instance, researchers have explored the incorporation of inertial measurement units (IMUs) into wearable gadgets to track movement patterns and offer assistance to individuals with visual impairments while they engage in physical activities [28]. These devices can identify alterations in posture, walking style, and orientation, providing auditory or tactile cues to help users maintain technique and navigate around obstacles [28].
In addition, haptic feedback systems have been proposed [29] to enhance the sensory perception of blind individuals during physical activities. Such systems use vibratory or tactile stimuli to convey information about the environment, such as the presence of nearby objects or changes in terrain, enabling users to navigate confidently and safely [29].
Moreover, developments in wearable technology and machine learning algorithms have enabled seamless navigation aids for visually impaired individuals. These systems utilize sensors to detect obstacles, map out the surroundings, and provide personalized guidance to users during outdoor activities like hiking or urban navigation [30].
Researchers [31] highlighted recent advancements in assistive technologies for the visually impaired, addressing challenges in mobility and daily life. With a focus on indoor and outdoor solutions, the paper explores location and feedback methods, offering valuable insights for the integration of smart cane technology.
A related study underscores the growing global concern of visual impairment, with approximately 1.3 billion affected individuals, a number projected to triple by 2050. To address the challenges faced by the visually impaired, the proposed “Smart Cane device” leverages technological tools, specifically cloud computing and IoT wireless scanners, to enhance indoor navigation. In response to the limitations of traditional options such as white canes and guide dogs, the Smart Cane aims to seamlessly facilitate the displacement of visually impaired individuals, offering a novel solution for navigation and communication with their environment [32].
In summary, several studies [9,10,11,12] have indicated that individuals with visual impairments have limited engagement in physical activities, which can harm their health and well-being. The proposed approach has several unique features compared with existing solutions such as WeWalk. First, it is integrated with the Facebook Chat API, enabling direct messaging and social interaction on that platform and thereby improving accessibility for visually impaired people. Second, its step challenge functionality fosters healthy competition and community engagement among visually impaired individuals, promoting a healthier lifestyle. Third, the system integrates a Raspberry Pi 4, which improves connectivity and performance for smoother operation and a reliable user experience. In addition, fbchat and python-facebook-api integration allows effective communication with Facebook servers, supporting seamless interaction for the users. SpeechRecognition library integration is one of the device’s most significant features, enabling management through voice commands and improving accessibility. The proposed solution fills a gap by combining health promotion, social interaction, and accessibility features tailored for blind people, making the device innovative and distinct in the domain of assistive technology for the visually impaired.

3. Smart Cane Architectural Model and Components

This research work proposes a smart cane that integrates technological advancements, incorporating advanced sensors, social media connectivity, and purpose-built algorithms. It has the ability to enhance mobility and accessibility while serving as a catalyst to encourage physical activity and foster socialization among blind people, empowering blind individuals while increasing their social and physical activity. The implementation of three carefully crafted algorithms ensures precise step counting, swing detection, and proximity measurement.
The architecture of the smart cane was designed to provide a comprehensive set of functionalities to assist visually impaired individuals in their daily lives. This architectural model comprises several distinct layers, each responsible for specific functions and interactions.

3.1. Architectural Model

3.1.1. Platform Layer

The platform layer serves as the foundation of the architecture, providing basic hardware and software resources required for the smart cane’s operation. This can include the underlying hardware, operating system, device drivers, etc.

3.1.2. Communications Layer

The communications layer facilitates the exchange of information between the smart cane and other devices or systems. This can include wireless communication via Bluetooth or Wi-Fi, allowing the cane to connect to other devices.

3.1.3. Sensor Layer

This layer is equipped with sensors such as a camera, ultrasonic sensor, and accelerometer. These sensors collect data about the user’s environment, helping the cane detect obstacles, count steps, and interact with the external world.

3.1.4. Calculation (Operations) Layer

The calculation layer processes the data collected by the sensors. It performs calculations and operations on this data including obstacle detection from camera images, processing ultrasonic signals to measure obstacle distances, and analyzing accelerometer movements.

3.1.5. User Interface Layer

This layer provides the interface between the smart cane and the user. It includes components such as speech synthesis to provide information to the user, speech recognition to receive voice commands, and hand gesture sensors for more complex interactions. This layer enables the user to communicate with the cane and receive feedback. Figure 1 shows the layered architectural model for the smart cane.
Each layer interacts with the others to allow the smart cane to function effectively and intuitively. Data are collected by the sensors and processed in the calculation layer, and the results are presented to the user through the user interface. The communications layer also allows the cane to interact with other devices and services, providing an enriched and connected user experience.
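To make this data flow concrete, the following minimal Python sketch wires mock sensor, calculation, and user interface layers together. All class and method names here are illustrative assumptions, not taken from the actual firmware.

from dataclasses import dataclass

@dataclass
class SensorReading:
    acceleration: tuple          # (x, y, z) in m/s^2
    obstacle_distance_cm: float

class SensorLayer:
    def read(self) -> SensorReading:
        # A real implementation would poll the accelerometer and ultrasonic sensor here.
        return SensorReading(acceleration=(0.0, 0.1, 9.8), obstacle_distance_cm=120.0)

class CalculationLayer:
    def process(self, reading: SensorReading) -> dict:
        # Turn raw readings into user-facing events.
        return {"obstacle_ahead": reading.obstacle_distance_cm < 100}

class UserInterfaceLayer:
    def present(self, events: dict) -> None:
        if events["obstacle_ahead"]:
            print("speak: obstacle ahead")  # stand-in for the text-to-speech call

# One pass through the pipeline: sensor layer -> calculation layer -> user interface layer.
ui = UserInterfaceLayer()
ui.present(CalculationLayer().process(SensorLayer().read()))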

3.2. Hardware Components

3.2.1. Audio–Tactile Interaction Module

This module serves as the primary information delivery mechanism for the users of the smart cane. It encompasses two main components:
Headphone:
This output device is crucial in facilitating the text-to-speech feature of the smart cane. With the help of a text-to-speech engine, written messages and notifications from social media are translated into auditory messages. The system of the smart cane is designed to interpret the content of the messages and convert them into clear, audible speech, which is then delivered through the headphone. This functionality ensures that users are informed of any social media activities in real-time, enhancing their ability to respond promptly and be actively engaged.
DC Motor Vibration:
This forms the tactile part of the interaction. The haptic feedback mechanism leverages a DC motor that triggers vibrations whenever there are notifications such as incoming messages, friend requests, or likes on a post on the user’s social media profiles. The vibration intensity can be customized according to the user’s preference, ensuring comfort and ease of use. This non-auditory alert system serves as an efficient and discreet method of notification, reducing reliance on auditory signals alone.
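As a rough illustration of how customizable vibration intensity could be driven from the Raspberry Pi, the sketch below pulses a motor with PWM; the GPIO pin number, PWM frequency, and duty-cycle values are assumptions, not the device’s actual wiring.

import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18  # hypothetical GPIO pin wired to the motor driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)
pwm = GPIO.PWM(MOTOR_PIN, 100)  # 100 Hz PWM carrier

def vibrate(intensity_percent, duration_s):
    # Vibration intensity maps to the PWM duty cycle (0-100).
    pwm.start(intensity_percent)
    time.sleep(duration_s)
    pwm.stop()

# Example: a gentle pulse for a 'like', a stronger one for a new message.
vibrate(30, 0.2)
vibrate(80, 0.5)
GPIO.cleanup()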

3.2.2. Input Command Module

Users can interface with and operate the smart cane system using this module, which consists of two essential parts:
High-sensitivity microphone:
This module allows for the accurate capture of spoken commands. By speaking directly into the microphone, users can issue commands such as “send a message”, “like this post”, or “scroll through menus”. These vocal instructions are processed by the speech-to-text technology integrated into the smart cane, making hands-free operation possible without any typing.
Gesture Sensor:
The gesture sensor detects hand gestures using optical or motion-detecting technologies and converts them into navigational commands. For instance, in the Messenger app, a swipe to the right could mean moving to the next conversation, and a swipe up could indicate scrolling up the feed. This gesture control system provides a tactile, intuitive way for users to interact with their social media accounts.
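A simple dispatch table suffices to turn recognized gestures into such commands. The sketch below is illustrative; the gesture names and the actions they trigger are assumptions.

def next_conversation():
    print("Opening the next Messenger conversation")

def scroll_feed_up():
    print("Scrolling the feed up")

# Map each recognized gesture to its navigational action.
GESTURE_ACTIONS = {
    "swipe_right": next_conversation,
    "swipe_up": scroll_feed_up,
}

def handle_gesture(gesture):
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()
    else:
        print("Unrecognized gesture:", gesture)

handle_gesture("swipe_right")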

3.2.3. Microphone Integration for Speech Recognition

The addition of a microphone for voice recognition is a considerable improvement to the smart cane system. Through this integration, the device gains voice control capabilities, enabling users to communicate with it verbally. The user’s spoken commands are captured by the microphone and converted into text by the speech-to-text algorithm running on the Raspberry Pi 4. This feature makes it easy to navigate the system’s menus and options: users can utilize voice commands to send messages, make menu selections, create or accept challenges, administer groups, and carry out other tasks. Figure 2 shows the smart cane’s interconnections and how they work together.

3.2.4. Local Storage

The smart cane has an internal digital storage system, referred to as local storage, that allows it to retain data temporarily before sending it to the cloud. This can include step totals, user preferences, command history, and interaction logs. The local storage serves two purposes: it guarantees that the device functions independently even without a constant Internet connection, and it acts as a buffer when quick access to cloud storage is not possible (due to connectivity problems or other reasons). This gives the smart cane system the flexibility to operate efficiently in various circumstances. The personal data of smart cane users are not sent to the cloud; only system data and the daily step counts, tagged with the cane’s ID, are uploaded.
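One plausible way to implement this buffer is a small on-device SQLite database with a sync flag, as sketched below; the schema, table name, and file name are assumptions for illustration.

import sqlite3
from datetime import date

conn = sqlite3.connect("smartcane.db")
conn.execute("""CREATE TABLE IF NOT EXISTS daily_steps (
    day TEXT PRIMARY KEY,
    cane_id TEXT,
    steps INTEGER,
    synced INTEGER DEFAULT 0)""")

def record_steps(cane_id, steps):
    # Store (or update) today's count locally; synced = 0 marks it as pending upload.
    conn.execute(
        "INSERT OR REPLACE INTO daily_steps (day, cane_id, steps, synced) VALUES (?, ?, ?, 0)",
        (date.today().isoformat(), cane_id, steps))
    conn.commit()

def unsynced_rows():
    # Rows still waiting for an Internet connection before upload to the cloud.
    return conn.execute(
        "SELECT day, cane_id, steps FROM daily_steps WHERE synced = 0").fetchall()

record_steps("cane-0042", 5120)
print(unsynced_rows())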

3.2.5. Step Count Module

The step count module is an inventive feature that measures physical activity. It uses an accelerometer not only to tally the user’s steps, but also to detect the orientation and acceleration of the cane. By analyzing variations in the acceleration data, the accelerometer can determine the direction of movement, offering users real-time feedback about their orientation. This feedback is essential for visually impaired individuals to navigate effectively, ensuring that they maintain a straight path or adjust their direction as needed.
As the movement and speed of each step are detected, these parameters are transformed into digital information, allowing for a comprehensive analysis of the user’s gait and walking patterns. The integration of an accelerometer with a gyroscope further refines this capability by providing a three-dimensional understanding of the cane’s position, which is crucial for determining the user’s orientation in space.
The data collected are stored locally on the smart cane’s built-in storage and can be synchronized with social media platforms for challenges or health tracking purposes. This encourages users to stay active while also providing a valuable set of data that can be used for navigation assistance. With this advanced feature, the smart cane not only promotes physical activity, but also enhances the user’s orientation and safety, reinforcing their confidence as they engage with their environment.
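The orientation feedback described above can be approximated from the accelerometer alone when the cane is quasi-static. The sketch below estimates tilt (roll and pitch) from the gravity vector; a production version would fuse these angles with gyroscope rates (for example, through a complementary filter), and the axis conventions here are assumptions.

import math

def tilt_from_accel(ax, ay, az):
    # Roll and pitch, in degrees, from a gravity-dominated accelerometer reading.
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Cane held nearly upright: gravity falls mostly on the z axis.
print(tilt_from_accel(0.3, 0.2, 9.7))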

3.2.6. Cloud Integration

Cloud integration is a crucial component of the smart cane system, handling data management. At the end of each day, the system automatically uploads all locally saved information, such as step counts, activity logs, and user preferences, to secure cloud storage. This not only assures the security of the data but also enables data analysis for system upgrades and customized user experiences.

3.2.7. Battery

The smart cane system is powered by a rechargeable power bank. This battery module offers a portable yet potent energy supply, giving the device a long operating period, and is easily recharged with a standard charging cable. Its large capacity accommodates the power needs of multiple modules, including the accelerometer, gesture sensor, microphone, and other parts, for extended periods. Users can therefore depend on the smart cane throughout the day, increasing their independence and self-assurance as they use social media and engage with others. Different modes were also developed to save energy: Eco mode reduces consumption to extend battery life, and Offline mode saves power when there is no need to communicate with the cloud server.

3.3. Software Components

3.3.1. Facebook Chat API Integration

The smart cane uses the Facebook Chat API to communicate directly with Facebook’s messaging platform. Through the smart cane, users can send and receive messages, view alerts, and carry out other Facebook-related actions. It provides a smooth, integrated solution that increases accessibility to the most popular social networking site for those with visual impairments.
The integration of a step counter algorithm allows the device to accurately count the steps a user takes, fostering both a healthy lifestyle and social interaction through the creation of challenges.
Main Menu:
This is the first layer of user interaction with the smart cane system. It includes:
Messages: This option gives users access to their Messenger inbox, allowing them to listen to their messages through the headphones.
New Message: This feature enables users to compose and send a new message using voice commands.
Open New Message: This functionality gives users the ability to open and listen to new, unread messages.
Group Manager:
This is a feature that allows users to manage their group chats on Messenger. It includes options to:
Add Group: Users can create a new group chat using voice commands.
Update Group: Allows users to make changes to existing group chats such as adding or removing members or changing the group’s name.
Delete: This function enables users to remove a group chat from their list.
Challenge:
A unique feature of the smart cane system that enhances user engagement is the creation of step challenges. The challenge function is intended to promote friendly competition and interpersonal engagement among users: it enables users who are blind or visually impaired to take part in step challenges, fostering a healthy lifestyle and a sense of community.
Create New Challenge:
  • The user who initiates a new step challenge acts as the administrator. They must provide the duration of the challenge (in days) and assign a name to it. As the administrator, they have the authority to add or remove participants, giving them control over the participants in the challenge.
  • Once the challenge is created, the system can automatically generate and send invitations to potential participants. This includes sending challenge requests to the top 10 active discussions in the user’s Messenger. Additionally, the administrator can manually add or invite people who are not in the top 10, ensuring flexibility in participant selection.
  • Invited participants have the authority to accept or refuse the challenge. If they accept, they are automatically added to the challenge by the system, and they can begin contributing to the step count.
  • During the challenge, users can check the statistics such as who has the best score. This real-time tracking allows participants to see their progress and standings before the challenge finishes, adding excitement and motivation.
  • Participants in the challenge can communicate within a group chat, allowing them to motivate each other, share progress, and foster camaraderie. This social aspect enhances engagement and creates a supportive community around the challenge.
  • Every day at 11 p.m., an update of the daily step count is sent to all participants. At the end of the challenge, the final standings are shared, and the winners can be celebrated (a scheduling sketch follows this list).
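One straightforward way to implement the daily 11 p.m. broadcast is with a lightweight scheduler. The sketch below uses the third-party schedule package, and the send_daily_update function is a hypothetical stand-in for the code that posts standings to the challenge group chat.

import time
import schedule

def send_daily_update():
    # Hypothetical: fetch each participant's step count and post
    # the standings to the challenge group chat.
    print("Sending daily step standings to all participants")

schedule.every().day.at("23:00").do(send_daily_update)

while True:
    schedule.run_pending()
    time.sleep(60)  # check the schedule once a minute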
The step count, captured using the accelerometer and step counter algorithm, is stored locally on the smart cane and uploaded to the cloud whenever Internet connectivity is available. The step count algorithm in the smart cane system is a sophisticated method that accurately measures the physical activity of the user, specifically the number of steps taken. It combines the use of an accelerometer and triangulation techniques to calculate both the steps and the distance traveled.
The accelerometer is a sensor that measures the acceleration forces exerted on the smart cane. These forces can be used to detect the motion and speed of each step. As the user walks, the accelerometer detects the distinct movement patterns associated with each step. By analyzing these patterns, the algorithm can accurately count the number of steps taken.
In addition to step counting, the system calculates the distance traveled by using triangulation techniques with Wi-Fi signals. The smart cane detects Wi-Fi signals from known access points and calculates the distance between the cane and each access point. By measuring the distances to multiple access points and knowing their locations, the system can triangulate the user’s position. Repeating this process over time allows the system to track the user’s movement and calculate the total distance traveled.
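The text does not spell out the propagation model behind this distance estimate. A common choice is the log-distance path-loss model, sketched below together with a linearized least-squares position fix from three access points; the calibration constants (tx_power_dbm and the path-loss exponent n) and the access point coordinates are assumptions.

import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.5):
    # Log-distance path-loss model: d = 10 ** ((P_tx - RSSI) / (10 * n)), in meters.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(p1, p2, p3, d1, d2, d3):
    # Subtract pairs of circle equations (x - xi)^2 + (y - yi)^2 = di^2
    # to obtain two linear equations in (x, y), then solve them.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three access points at known positions and their measured RSSI values.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-55.0, -65.0, -60.0]
d = [rssi_to_distance(r) for r in rssi]
print(trilaterate(aps[0], aps[1], aps[2], d[0], d[1], d[2]))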
This integration of the step counter algorithm and challenge feature brings a novel and engaging aspect to the smart cane system, allowing visually impaired users to participate in a health-centric social activity. It not only promotes a healthy lifestyle, but also enhances their social life, thus fostering a sense of community and camaraderie.

3.3.2. Raspberry Pi 4 Integration

The smart cane system incorporates a Raspberry Pi 4 single-board computer. Through this upgrade, the system gains extra memory options, dual-band wireless networking capabilities, and improved processing power. It enables the different parts, including the accelerometer, gesture sensor, and audio–tactile interaction module, to operate more effectively and dependably. The Raspberry Pi 4’s improved connectivity options allow the device to execute tasks such as uploading step counts, receiving messages from social networking platforms, and other functions that call for Internet access without problems.

3.3.3. Python Facebook API Integration

The python-facebook-api is a robust library that simplifies the process of interacting with Facebook’s Graph API, allowing the smart cane system to connect and interact directly with Facebook’s servers. It is responsible for a number of functionalities, including fetching user messages, sending new messages, creating groups, and managing group chats. The python-facebook-api provides an efficient and secure way to communicate with Facebook, enhancing the system’s functionality and reliability.
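Rather than reproduce the wrapper’s exact call signatures, the sketch below shows the kind of HTTPS exchange such a Graph API client performs under the hood; the API version and the access token are placeholders.

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; obtained through Facebook's OAuth flow

# Fetch the authenticated user's id and name from the Graph API.
resp = requests.get(
    "https://graph.facebook.com/v12.0/me",
    params={"fields": "id,name", "access_token": ACCESS_TOKEN},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g., {"id": "...", "name": "..."}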

3.3.4. fbchat Library Integration

Another significant library utilized in the smart cane system is fbchat, a client library for Facebook Messenger that enables direct communication between the device and the Messenger network. It can perform a wide range of tasks, including sending and receiving messages, retrieving recent conversations, maintaining seen marks, showing typing indicators, and more. Together, the fbchat library and the python-facebook-api form the foundation of the system’s social media interaction capabilities.
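For illustration, a minimal exchange with the fbchat 1.x client API might look as follows; the credentials and thread id are placeholders, and unofficial Messenger clients are liable to break as Facebook changes its internals, so treat this as a sketch rather than production code.

from fbchat import Client
from fbchat.models import Message, ThreadType

client = Client("user@example.com", "password")  # placeholder credentials

# Send a text message to a one-to-one thread.
client.send(
    Message(text="Daily steps: 5120 - keep it up!"),
    thread_id="0000000000",  # placeholder thread id
    thread_type=ThreadType.USER,
)

# Retrieve the most recent conversations (e.g., the top active threads).
for thread in client.fetchThreadList(limit=10):
    print(thread.name)

client.logout()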

3.3.5. Speech Recognition Library Integration

The smart cane system’s voice command feature is based on the SpeechRecognition library, an effective tool for turning spoken words into written text. When a user speaks into the smart cane’s microphone, the SpeechRecognition library converts the speech into text, which the Raspberry Pi 4 then processes. The library supports several languages and has remarkable recognition accuracy, making it well suited to the smart cane system. It offers a more accessible and intuitive user experience by enabling users to manage the device with voice commands.
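A minimal capture-and-recognize loop with the SpeechRecognition library is sketched below; it assumes the default Google Web Speech backend (which needs Internet access), and the command phrase checked for is illustrative.

import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source, duration=0.5)
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    print("Heard:", text)
    if "new message" in text:
        print("Opening the message composer")  # would hand off to the menu system
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as err:
    print("Recognition service error:", err)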
The block diagram for the proposed smart cane is illustrated in Figure 3.

4. Results

To assess the proposed smart cane, three algorithms were tested. The first algorithm measures the number of steps taken by the user from accelerometer data. It defines constants such as the minimum threshold for step detection (thresholdMin), a detection time window (timeWindow), and the size of the analysis window (windowSize). The calculateAverage function computes the average of a circular buffer of accelerometer readings. The algorithm then enters a continuous loop that repeatedly reads data from the accelerometer, computes the magnitude of the acceleration, and stores the value in the circular buffer.
Once the time window has elapsed, the average of the buffered values is calculated. If it exceeds the minimum threshold, the step count is incremented, indicating that a step has been detected. The algorithm then shifts the circular buffer by one position and the process continues. The loop keeps running until all measurements are taken, and the final number of detected steps is stored in the steps variable.
This algorithm was tested ten times, and the step counts produced by the algorithm were compared with the actual number of steps taken by the user. Table 1 shows the outcomes. The accuracy of this algorithm varies across scenarios, deviating substantially from the actual step count. Figure 4 shows a graphical representation of the Algorithm 1 results.
Algorithm 1 Step Counter Algorithm Using Accelerometer Data and Moving Average Filter.
// Define variables
const thresholdMin = 0.1 // Minimum threshold for detecting a step
const timeWindow = 100 // Time window for detection in milliseconds
const windowSize = 10 // Size of the analysis window
const buffer = array of size windowSize
int steps = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
  sum = 0
  for each value in buffer:
    sum = sum + value
  return sum / windowSize

// Loop to read data from the accelerometer
while true:
  readAccelerometer() // Read accelerometer data
  accelerationNorm = norm(of the read data) // Calculate the norm of the acceleration

  // Add the acceleration norm to the buffer
  buffer [current time % windowSize] = accelerationNorm

  if current time >= timeWindow:
    average = calculateAverage(buffer)
    if average > thresholdMin:
      steps = steps + 1
    shift the buffer by one position

  wait(sampling interval) // Wait for some time between readings

// At the end of the measurement, the “steps” variable will contain the number of detected steps
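For concreteness, the following is a runnable Python rendering of Algorithm 1, faithful to the listing above; the simulated sample stream stands in for the real accelerometer, and the threshold and window constants keep the listing’s values.

import math
from collections import deque

THRESHOLD_MIN = 0.1   # minimum buffer average for detecting a step
WINDOW_SIZE = 10      # size of the moving-average window

def step_count(samples):
    # samples: iterable of (ax, ay, az) accelerometer tuples.
    buffer = deque()          # circular buffer of acceleration magnitudes
    steps = 0
    for ax, ay, az in samples:
        buffer.append(math.sqrt(ax * ax + ay * ay + az * az))
        if len(buffer) == WINDOW_SIZE:
            average = sum(buffer) / WINDOW_SIZE
            if average > THRESHOLD_MIN:
                steps += 1
            buffer.popleft()  # shift the buffer by one position
    return steps

# Simulated stream: near-rest samples alternating with movement bursts.
stream = ([(0.0, 0.0, 0.05)] * 10 + [(0.0, 0.0, 0.5)] * 10) * 3
print(step_count(stream))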
The second algorithm was designed to detect and count the “swings” made by the cane using the lateral acceleration data acquired from the accelerometer. Initially, a few variables were initialized, as in the first algorithm; here, the buffer stores lateral acceleration data.
The algorithm comprises a loop that constantly reads data from the accelerometer, focusing on left–right movement, and stores the values in the buffer array. The average of the buffer is then calculated; if it exceeds the minimum threshold, the swing counter is incremented, indicating that a swing has been detected. Next, the buffer is shifted one position to accommodate new data. The loop continues until all swing measurements are taken. Figure 5 shows a graphical representation of Algorithm 2, and Table 2 shows the values obtained by implementing it. Here, one step = one swing.
Algorithm 2 Step Counter Algorithm Using Lateral Accelerometer Data.
// Define variables
const thresholdMin = 0.1 // Minimum threshold for detecting a swing
const timeWindow = 1000 // Time window for detection in milliseconds
const buffer = array of size timeWindow
int swings = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
  sum = 0
  for each value in buffer:
    sum = sum + value
  return sum / timeWindow

// Loop to read data from the accelerometer
while true:
  readAccelerometer() // Read accelerometer data
  lateralAcceleration = acceleration in the lateral direction (left-right)

  // Add the lateral acceleration to the buffer
  buffer [current time % timeWindow] = lateralAcceleration

  if current time >= timeWindow:
    average = calculateAverage(buffer)
    if average > thresholdMin:
      swings = swings + 1
    shift the buffer by one position

  wait(sampling interval) // Wait for some time between readings

// At the end of the measurement, the “swings” variable will contain the number of detected swings
The second algorithm counted more swings than actually occurred, suggesting that it is sensitive and prone to overestimating swings under certain conditions.
The third algorithm combines step detection using the accelerometer with proximity measurements based on Bluetooth and Wi-Fi RSSI signals. As in the previous algorithms, a minimum threshold was set for step detection, and a buffer was initialized to store the lateral acceleration. For proximity measurement, an rssiThreshold was used, along with maxDistance and two counter variables for counting Wi-Fi and Bluetooth proximities.
To count detected steps, the calculateAverage function was applied to the buffered values. When the average lateral acceleration exceeded the threshold, the step counter was incremented, indicating that a step had been detected.
The initial processing follows the same steps as the first two algorithms; the algorithm then reads the Bluetooth and Wi-Fi RSSI signal strengths, checks whether they indicate proximity within the configured threshold and distance range, and increments the respective proximity counters. The system then waits for the sampling interval before repeating the process.
At the end of the run, the algorithm provides counts of detected steps, Bluetooth proximities, and Wi-Fi proximities in the variables steps, bluetoothDistances, and wifiDistances, respectively. Figure 6 shows a graphical representation of the implementation of Algorithm 3, and Table 3 shows the measured values.
Algorithm 3 Step Counter Algorithm Using Accelerometer and RSSI Signals.
// Define variables for the accelerometer
const minThreshold = 0.1 // Minimum threshold to detect a step
const stepTimeWindow = 1000 // Time window for step detection in milliseconds
const stepBuffer = array of size stepTimeWindow
int steps = 0

// Define variables for distance measurement with RSSI
const rssiThreshold = −70 // RSSI threshold to consider proximity
const maxDistance = 10 // Maximum distance to consider proximity (in meters)
int bluetoothDistances = 0
int wifiDistances = 0

// Function to calculate the average of the buffer
function calculateAverage(buffer):
  sum = 0
  for each value in buffer:
    sum = sum + value
  return sum / stepTimeWindow

// Loop for reading accelerometer data
while true:
  readAccelerometer() // Read accelerometer data
  lateralAcceleration = acceleration in the lateral direction (left-right)

  // Add lateral acceleration to the buffer
  stepBuffer [current time % stepTimeWindow] = lateralAcceleration

  if current time >= stepTimeWindow:

    averageStep = calculateAverage(stepBuffer)
    if averageStep > minThreshold:
      steps = steps + 1
    shift the stepBuffer by one position

  // Read Bluetooth and WiFi RSSI
  bluetoothSignalStrength = readBluetoothRSSI()
  wifiSignalStrength = readWifiRSSI()

  // Check for proximity based on RSSI: an RSSI value (in dBm) cannot be
  // compared directly to a distance in meters, so the signal is first
  // converted to an estimated distance before testing against maxDistance
  estimatedBluetoothDistance = estimateDistanceFromRSSI(bluetoothSignalStrength)
  estimatedWifiDistance = estimateDistanceFromRSSI(wifiSignalStrength)
  if bluetoothSignalStrength >= rssiThreshold && estimatedBluetoothDistance <= maxDistance:
    bluetoothDistances = bluetoothDistances + 1
  if wifiSignalStrength >= rssiThreshold && estimatedWifiDistance <= maxDistance:
    wifiDistances = wifiDistances + 1

  wait(sampling interval) // Wait for some time between readings

// At the end of the measurement, the “steps”, “bluetoothDistances”, and “wifiDistances” variables will contain the respective counts of detected steps, Bluetooth proximities, and WiFi proximities.
In our study, we conducted extensive testing of the three step-counting algorithms using accelerometer data. To ensure robustness and consistency, each algorithm was subjected to ten repetitions by each participant. The results, plotted in Figures 4–6 and listed in Tables 1–3, provide a clear comparison of the performance of the three algorithms.

5. Discussion

This research presented and implemented three algorithms in a smart cane to measure the steps of a visually impaired person. The algorithms measure not only steps, but also swings and proximities, making use of Wi-Fi and Bluetooth RSSI signals.
From the analysis of Algorithm 1, it was noted that the step counting mechanism was influenced by the lateral movements of the cane, as visually impaired users often sweep the cane left and right to detect obstacles. This motion could lead to an overestimation or underestimation of the step count, as the algorithm did not adequately differentiate between forward steps and lateral cane movements. Furthermore, Algorithm 1 lacked validation for the user’s actual motion direction, whether they were moving forward, backward, or to the side. This led to fluctuating accuracy rates under different walking scenarios, highlighting the need for a more sophisticated algorithm that could discern the intended direction of travel and discriminate between obstacle detection sweeps and the actual steps taken.
The second algorithm, which counted swings, overestimated the swing count in comparison to the real count. This shows that the algorithm is sensitive and prone to overestimating swings under particular conditions.
Considering the third and last algorithm, which combined step detection with proximity measurement, the steps calculated by this algorithm closely matched the real step count. This result shows that step detection using the accelerometer provided the most accurate results, while the proximity component counted nearby Bluetooth and Wi-Fi signals.
The results show the effectiveness of Algorithm 3 in accurately detecting steps and highlight the potential of a smart cane to help visually challenged people in their daily lives by tracking steps and enabling social media interaction.

6. Conclusions

This research has not only presented the design of a smart cane aimed at improving the social media experiences of the visually impaired, but also recognized the essential role of technology in enhancing the personal safety of individuals as they navigate outside their homes. While the original study focused on integrating blind individuals into the digital age and improving independence through social media access, it is paramount to underscore the smart cane’s contribution to personal security.
The multifaceted design of the smart cane encompasses audio–tactile interaction, gesture detection, speech-to-text translation, and cloud connectivity through Bluetooth, which collectively serve to create a safer navigation experience. The addition of proximity sensors, GPS tracking, and emergency alert systems provides users with the confidence to explore their surroundings securely. The software components, including the Facebook Chat API and the advanced step count algorithm, are complemented by the device’s voice recognition capabilities, which not only enhance the user’s interaction with social media, but also bolster safety by allowing hands-free operation and immediate access to assistance if needed.
Algorithm 3, in particular, demonstrated superior performance in step count accuracy, which is integral to the safety features, as precise step and swing detection are crucial for avoiding obstacles and hazards.
Future work including user evaluations with visually impaired individuals will not only assess the smart cane’s usability and effectiveness in real-world scenarios, but will also prioritize the evaluation of its safety features. Ensuring the practical usability of the smart cane includes a thorough validation of its security and emergency response systems, which are vital for the safety and well-being of its users. By emphasizing personal safety alongside social media enhancement, the smart cane represents a holistic approach to supporting the visually impaired in their quest for a more independent and secure lifestyle.

Author Contributions

Software, hardware, evaluation and data acquisition: M.D.M.; writing—original draft preparation and final version: M.D.M.; writing—review and editing: B.-A.J.M., H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant number RGPIN-2019-07169.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gillen, G. Cognitive and Perceptual Rehabilitation: Optimizing Function; Elsevier Health Sciences: Amsterdam, The Netherlands, 2008. [Google Scholar]
  2. Sapp, W. Visual impairment. In International Encyclopedia of Education, 3rd ed.; Peterson, P., Baker, E., McGaw, B., Eds.; Elsevier: Leawood, KS, USA, 2010; pp. 880–885. [Google Scholar]
  3. Morone, P.; Cuena, E.C.; Kocur, I.; Banatvala, N. Investing in Eye Health: Securing the Support of Decision-Makers; World Health Organization: Geneva, Switzerland, 2012; Available online: http://www.who.int/iris/handle/10665/258521 (accessed on 2 February 2024).
  4. Subramoniam, S.; Osman, K. Smart Phone Assisted Blind Stick. Turk. Online J. Des. Art Commun. 2018, 2613–2621. [Google Scholar] [CrossRef]
  5. Resnick, M.D.; Harris, L.J.; Blum, R.W. The impact of caring and connectedness on adolescent health and well-being. J. Paediatr. Child Health 1993, 29 (Suppl. 1), S3–S9. [Google Scholar] [CrossRef] [PubMed]
  6. Modell, S.J.; Rider, R.A.; Menchetti, B.M. An exploration of the influence of educational placement on the community recreation and leisure patterns of children with developmental disabilities. Percept. Mot. Ski. 1997, 85, 695–704. [Google Scholar] [CrossRef] [PubMed]
  7. Jessup, G.; Bundy, A.C.; Broom, A.; Hancock, N. The social experiences of high school students with visual impairments. J. Vis. Impair. Blind 2017, 111, 5–19. [Google Scholar] [CrossRef]
  8. Folmer, E. Exploring the use of an aerial robot to guide blind runners. ACM Sigaccess Access Comput. 2015, 112, 3–7. [Google Scholar] [CrossRef]
  9. Emre, S.; Huang, Y.; Ramtekkar, U.; Lin, S. Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digit. Med. 2020, 3, 122. [Google Scholar] [CrossRef]
  10. Wentz, B.; Lazar, J. Are separate interfaces inherently unequal? An evaluation with blind users of the usability of two interfaces for a social networking platform. In Proceedings of the iConference, Seattle, WA, USA, 2011. [Google Scholar]
  11. Helal, A.; Moore, S.; Ramachandran, B. Drishti: An integrated navigation system for visually impaired and disabled. In Proceedings of the Fifth International Symposium on Wearable Computers, Zurich, Switzerland, 8–9 October 2001; pp. 149–156. [Google Scholar]
  12. Harrison, J.; Lucas, A.; Cunningham, J.; McPherson, A.P.; Schroeder, F. Exploring the Opportunities of Haptic Technology in the Practice of Visually Impaired and Blind Sound Creatives. Arts 2023, 12, 154. [Google Scholar] [CrossRef]
  13. Wu, G.; Pan, C. Audience engagement with news on Chinese social media: A discourse analysis of the People’s Daily official account on WeChat. Discourse Commun. 2022, 16, 129–145. [Google Scholar] [CrossRef]
  14. Wu, S.; Adamic, L.A. Visually impaired users on an online social network. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; ACM: New York, NY, USA, 2014; pp. 3133–3142. [Google Scholar]
  15. Boyd, D. Why youth <3 social network sites: The role of networked publics in teenage social life. In Youth, Identity, and Digital Media; Buckingham, D., Ed.; MIT Press: Cambridge, MA, USA, 2008; pp. 119–142. [Google Scholar]
  16. Burke, M.; Marlow, C.; Lento, T. Social Network Activity and Social Well-Being. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, Atlanta, GA, USA, 10–15 April 2010. [Google Scholar]
  17. Sahoo, N.; Lin, H.-W.; Chang, Y.-H. Design and Implementation of a Walking Stick Aid for Visually Challenged People. Sensors 2019, 19, 130. [Google Scholar] [CrossRef]
  18. Mohit, J.; Diwakar, N.; Swaminathan, M. Smartphone usage by expert blind users. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–15. [Google Scholar] [CrossRef]
  19. Cole, G.; Carrington, P.; Cassidy, C.; Morris, M.R.; Kitani, K.M.; Bigham, J.P. “It’s almost like they’re trying to hide it”: How User-Provided Image Descriptions Have Failed to Make Twitter Accessible. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 549–559. [Google Scholar] [CrossRef]
  20. Khan, A.; Khusro, S. A mechanism for blind-friendly user interface adaptation of mobile apps: A case study for improving the user experience of the blind people. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 2841–2871. [Google Scholar] [CrossRef]
  21. Bigham, J.P.; Jayant, C.; Ji, H.; Little, G.; Miller, A.; Miller, R.C.; Miller, R.; Tatarowicz, A.; White, B.; White, S.; et al. VizWiz: Nearly Real-Time Answers to Visual Questions. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 3–6 October 2010. [Google Scholar]
  22. Kim, S.; Park, E.-S.; Ryu, E.-S. Multimedia vision for the visually impaired through 2D multiarray braille display. Appl. Sci. 2019, 9, 878. [Google Scholar] [CrossRef]
  23. Summer, A.; Mooney, B.; Ahmad, I.; Huber, M.; Clark, A. Object detection and sensory feedback techniques in building smart cane for the visually impaired: An overview. In Proceedings of the 13th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
  24. Folego, G.; Costa, F.; Costa, B.; Godoy, A.; Pita, L. Pay Voice: Point of Sale Recognition for Visually Impaired People. arXiv 2018, arXiv:1812.05740. [Google Scholar]
  25. Kuber, R.; Hastings, A.; Tretter, M. Determining the accessibility of mobile screen readers for blind users. In Proceedings of the IASTED Conference on Human-Computer Interaction, Baltimore, MD, USA, 14–16 May 2012; pp. 182–189. [Google Scholar]
  26. Smith-Jackson, T.; Nussbaum, M.; Mooney, A. Accessible cell phone design: Development and application of a needs analysis framework. Disabil. Rehabil. 2003, 25, 549–560. [Google Scholar] [CrossRef] [PubMed]
  27. Singh, S.; Jatana, N.; Goel, V. HELF (Haptic Encoded Language Framework): A digital script for deaf-blind and visually impaired. Univers. Access Inf. Soc. 2021, 22, 121–131. [Google Scholar] [CrossRef] [PubMed]
  28. Leiva, K.M.R.; Jaén-Vargas, M.; Codina, B.; Olmedo, J.J.S. Inertial Measurement Unit Sensors in Assistive Technologies for Visually Impaired People, a Review. Sensors 2021, 21, 4767. [Google Scholar] [CrossRef] [PubMed]
  29. Shull, P.B.; Damian, D.D. Haptic wearables as sensory replacement, sensory augmentation and trainer—A review. J. Neuroeng. Rehabil. 2015, 12, 59. [Google Scholar] [CrossRef] [PubMed]
  30. Joseph, A.M.; Kian, A.; Begg, R. State-of-the-Art Review on Wearable Obstacle Detection Systems Developed for Assistive Technologies and Footwear. Sensors 2023, 23, 2802. [Google Scholar] [CrossRef] [PubMed]
  31. Messaoudi, M.D.; Menelas, B.A.J.; Mcheick, H. Review of navigation assistive tools and technologies for the visually impaired. Sensors 2022, 22, 7888. [Google Scholar] [CrossRef] [PubMed]
  32. Messaoudi, M.D.; Menelas, B.A.J.; Mcheick, H. Autonomous smart white cane navigation system for indoor usage. Technologies 2020, 8, 37. [Google Scholar] [CrossRef]
Figure 1. Layered architectural model for the smart cane.
Figure 2. Smart cane interconnections and working.
Figure 3. Proposed smart cane.
Figure 4. Graphical representation of the implementation of Algorithm 1.
Figure 5. Graphical representation of the implementation of Algorithm 2.
Figure 6. Graphical representation of the implementation of Algorithm 3.
Table 1. Results obtained from the implementation of Algorithm 1.

Test Number    Cane-Calculated Step Count    Real Step Count
1              1                             5
2              2                             5
3              4                             10
4              3                             10
5              6                             15
6              6                             15
7              8                             20
8              11                            20
9              14                            25
10             14                            25
Table 2. Results obtained from the implementation of Algorithm 2.

Test Number    Cane-Calculated Step Count    Real Step Count
1              6                             5
2              5                             5
3              12                            10
4              15                            10
5              16                            15
6              19                            15
7              22                            20
8              20                            20
9              26                            25
10             28                            25
Table 3. Results obtained from the implementation of Algorithm 3.

Test Number    Cane-Calculated Step Count    Real Step Count
1              5                             5
2              5                             5
3              11                            10
4              10                            10
5              14                            15
6              17                            15
7              22                            20
8              21                            20
9              26                            25
10             25                            25
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
