Communication

Design of a “Cobot Tactile Display” for Accessing Virtual Diagrams by Blind and Visually Impaired Users

Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA 23298, USA
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(12), 4468; https://doi.org/10.3390/s22124468
Submission received: 7 April 2022 / Revised: 8 June 2022 / Accepted: 10 June 2022 / Published: 13 June 2022
(This article belongs to the Section Sensors and Robotics)

Abstract

Access to graphical information plays a very significant role in today’s world. This access can be particularly limited for individuals who are blind or visually impaired (BVIs). In this work, we present the design of a low-cost, mobile tactile display that also provides robotic assistance/guidance using haptic virtual fixtures in a shared control paradigm to aid in tactile diagram exploration. This work is part of a larger project intended to improve the ability of BVI users to explore tactile graphics on refreshable displays (particularly in terms of exploration time and cognitive load) through the use of robotic assistance/guidance. The particular focus of this paper is to share information related to the design and development of an affordable and compact device that may serve as a solution towards this overall goal. The proposed system uses a small omni-wheeled robot base to allow for smooth and unlimited movements in the 2D plane. Sufficient position and orientation accuracy is obtained by using a low-cost dead reckoning approach that combines data from an optical mouse sensor and an inertial measurement unit. A low-cost force-sensing system and an admittance control model are used to allow shared control between the Cobot and the user, with the addition of guidance/virtual fixtures to aid in diagram exploration. Preliminary semi-structured interviews with four blind or visually impaired participants who used the Cobot found that the system was easy to use and potentially useful for exploring virtual diagrams tactually.

1. Introduction

Graphic visual representations are increasingly used as the sole means to communicate a wide range of information at work, in school, and for daily living. Unfortunately, individuals who are blind and visually impaired (BVIs) currently have very limited access to the information in these graphics. Physical tactile graphics are the most common alternative used by BVIs to access this information, but they are often unavailable, time-consuming to make, expensive, and bulky, and they exhibit wear and tear relatively quickly. They can also be cumbersome in dynamic environments, such as when analyzing data or surfing the web, where rapid access to a large number of diagrams may be needed. However, BVIs’ independent access to information in diagrams is important to ensure their autonomy and provide equal opportunities for their advancement in education and employment. New methods are needed to address the problems of physical tactile graphics. In this paper, we focus on the design and development of a display device that provides a low-cost, rapidly refreshable method of displaying diagram information and that is neither bulky nor subject to wear and tear.
The use of alternate text descriptions is one such method that could address these issues. However, this method can often have difficulty communicating spatial concepts that are often key for understanding fields such as science and engineering (e.g., what is a sine wave, and what is meant by its phase). In addition, the ability for a BVI user to independently discover spatial patterns and relationships in graphical information is often lost as this step is part of the process of creating a summary word description (as, for example, the alternative of listing the raw locations of 100 data points would not be practical or easily comprehended). These concepts and tasks may be an essential part of a desired job or required school class. Finally, word descriptions can also be difficult to formulate for unfamiliar objects, especially for young children for whom the descriptions would need to fit within their limited vocabulary. It is unclear how one could explain to a child the concept of “above” without a physical demonstration or diagram.
Refreshable tactile displays/surfaces [1,2,3,4,5,6] have been proposed as alternatives, which allow for quick, dynamic access to electronic/virtual tactile diagram representations. At one end of the spectrum are tactile pin displays made from a large matrix of pins; however, these displays are very expensive despite covering much smaller areas compared to typical paper tactile diagrams and having limited spatial resolution (cost is an important consideration as most BVIs live below the poverty line). At the other end of the spectrum are surface haptic methods (e.g., vibration feedback on tablets). However, surfaces with vibration feedback are also limited in size (due to vibration strength) and provide only a single point of contact with a diagram (in contrast to the whole hand). The size of a display is a significant concern as even for those diagrams that have sufficient spatial resolution in their scaled-down form, reducing the size of the diagram has a negative effect on user performance [7]. The use of electrovibration could resolve the issue of size but is still restricted to a single point of contact with a diagram. Electrostatic displays can provide multipoint contact with a diagram, such as with the display created with specially instrumented gloves interacting with an LCD monitor [5]. However, this method cannot currently provide spatially distributed information within a finger, which is also important.
Small, mobile, refreshable pin displays with kinesthetic tracking [8,9] are more cost-effective than large matrix pin displays while enabling large workspaces through the movement of the hand/device across the virtual surface of the diagram. The motion of the hand with the pin display also allows these displays to resolve finer spatial details through temporal coding. The use of multipoint contact on the same and/or multiple fingers also significantly improves user performance compared to single point contact [10,11] and there can be no limitation in size as with surface displays. However, although a significant improvement in the ability to identify objects portrayed in diagrams occurred with multiple contacts, the time taken for a BVI user to explore the graphic was still much greater than for a physical tactile diagram [11]. This is problematic as it further exacerbates task completion time differences between sighted and blind users that exist due to differences in the information processing capacities of vision and touch.
Therefore, it is important to consider how the time needed to explore and understand a tactile diagram can be decreased while maintaining the aspects of previous small, refreshable pin display device designs that resulted in improved accuracy [8,11]. The main difficulty in using these types of displays [11,12] appears to be the inefficiency with which users track edges and lines. Simplifying diagrams so that a straight-line approximation is used for all edges and lines can make tracking edges/lines easier and improve performance [12]. However, this type of simplification is not always appropriate as the curvature of the edges and lines may be a key component of the information being relayed. It is also not clear how significant an improvement in response time occurs [12]. An alternate possibility is to provide guidance to the BVI user in the exploration process. Some researchers (e.g., [13,14]) proposed using the tactile display itself to provide symbols for navigational guidance, although this precludes providing information on the actual contact interaction with the diagram and is expected to be more cognitively demanding than someone (or something) guiding the hand physically.
In contrast, guidance/virtual fixtures on haptic/robot force feedback devices (e.g., the Phantom Omni) have been used for exploring simple line graphs and scatter plots, with both response time and accuracy better than with free exploration [15]. However, these algorithms have not been investigated for more complex diagrams, nor have the potential benefits of integration with refreshable pin displays. In addition, the devices used have relatively small workspaces, are typically bulky, vary in cost from expensive to extremely expensive and, needlessly for 2D diagrams, operate in 3D space. A potentially better option would be a small, omnidirectional mobile robot, which could be lightweight, have an arbitrarily large 2D workspace independent of the robot size, and be low in cost. Although a previous prototype was developed within our lab [16], it was bulky, had significant problems with position accuracy (critical for tactile diagram interpretation), and had not implemented any guidance/virtual fixtures on the device (including the needed measurement of the applied force by the user to allow shared control).
The focus of this paper is first to describe the development of a testbed device that combines a multifingered tactile pin display with a small and affordable mobile Cobot that can provide robotic assistance/guidance using haptic virtual fixtures in a shared control paradigm in a large workspace. This includes the development of a method to obtain sufficient position accuracy for the Cobot and tactile pins and the implementation of guidance/virtual fixtures that incorporates the user’s applied force. Second, the device is verified to have the functionality intended. In particular, that: (1) sufficient position accuracy is achieved (this has previously been a problem for tactile mice and confounded the interpretation of their use in accessing virtual diagrams), and (2) shared control with BVI users does not produce any unexpected outcomes (e.g., unexpected movement trajectories). The latter, as well as the usability of the device, is assessed through a preliminary user study and semi-structured interviews with BVI users.

2. Previous Work

The haptic devices previously used to provide guidance and virtual fixtures for exploring tactile diagrams and teaching motor tasks, such as learning to write, have several limitations: they have relatively small workspaces, are typically bulky, vary in cost from expensive to extremely expensive and, needlessly for 2D diagrams, operate in 3D space. Furthermore, they are impedance-type devices, which are usually low in inertia and are backdrivable. Unfortunately, their backdrivable force-source actuators can result in unexpected and undesirable movements of the devices [17]. This design also does not easily allow shared control of the robot’s movement with the user. An admittance-type device would be a better choice as it does not suffer from these problems. One possible alternative is an admittance control device functioning similar to a 2D plotter. However, a 2D plotter design would be large and heavy for even a moderately sized workspace, as well as lack smooth motions in nonperpendicular directions. A possibly better admittance control device is a small, omni-wheeled mobile robot: this could be small and lightweight, have a potentially arbitrarily large workspace independent of the robot size, normally use admittance control, and achieve smooth motion in arbitrary directions due to the use of omni wheels.
A prototype of an omni-wheeled mobile robot, for this purpose, was previously developed in our laboratory [16]. The developed prototype allowed for a commanded input force to be decomposed into three distinct velocities using an admittance control model to produce 2D omnidirectional movements of various magnitudes. Unfortunately, there were some significant design issues with the prototype. The physical dimensions of the device, including the main processor and omnidirectional wheels, were large and bulky, making the device hard for a user to maneuver with one hand. In addition, no method to sense the force applied by the user was incorporated into the device to provide shared control of the robot. However, the most significant problem was that measurement of the x, y, and θ position of the robot (needed to determine its location in the virtual diagram) was very poor. This was because the position was measured at the rotary shaft encoders of the omnidirectional wheels, which differed significantly from the actual location of the mobile robot due to wheel slippage. This is problematic as wheel slippage is an inherent characteristic of omnidirectional wheels [18]. These design issues need to be resolved while keeping the cost of the device low, as most BVIs live below the poverty line.
Previous mobile tactile displays used a variety of low-cost methods to determine the position of the device. Many displays (e.g., the VT Player) used a mouse sensor to obtain position information. However, the x, y position information provided by these sensors is highly inaccurate for determining the haptic position of a device with respect to some starting point [19] and no angular orientation is given. The latter is important for mapping the pin locations of the tactile display onto the virtual diagram when only the device (i.e., sensor) location is measured. The alternative would be to keep track of the location of all the individual pins separately (with 16 pins in total).
Both graphics tablets with a 2D radio-frequency coupler on the tactile device and touchscreen displays have been used for more accurate haptic x, y position measurements (e.g., [19,20]), but they still do not provide angular position and also introduce a restriction on the size of the active area. A size restriction is particularly problematic for large diagrams such as maps and blueprints, where spatial continuity is important to facilitate the already difficult task of exploring these diagrams to determine spatial relationships (such as making short cuts from one point to another).
An alternative is to consider newer dead reckoning methods used in mobile robot applications. Bonarini et al. [21] showed that the problem of cumulative errors when using encoders on the device wheels for dead reckoning can be overcome using two optical mice attached at the bottom of a robot to accurately calculate robot pose. An alternate sensor fusion approach was successfully used in [22], where data from an optical mouse sensor, the yaw angle calculated from IMU data, and the wheel encoder data were combined using an extended Kalman filter to accurately estimate the position of a two-wheeled robot. Both methods showed promising results and were initially considered as potential position-sensing systems for our device. However, given the size and cost constraints of our device, these were not selected for implementation. This is because implementation of the first method requires a significant increase in the size of our device over other methods, due to the need to separate the two optical mice by a distance and still have them attached to the robot base. Although implementation of the second method [22] requires the use of only a single mouse sensor, the information gathered from this sensor needs to be combined with information from both an IMU and wheel encoders, resulting in increased system cost. Moreover, this method also requires that the mouse sensor be placed on the robot’s edge, resulting in increased size and complexity.
Here, we will consider a sensor fusion approach using a low-cost dead reckoning method that addresses these issues by using a single optical mouse sensor and a commercially available IMU. In addition, neither needs to be placed away from the center of the robot base. The displacement data provided by the optical mouse sensor is combined with the orientation angle (θ) from an IMU to determine the precise x, y location of the device, as well as its orientation.
In considering the form of the tactile feedback that should be provided to the BVI user, it should be noted that BVIs use their whole hand (and sometimes both hands) in exploring a tactile graphic [23]. This consists of spatially distributed information within a finger pad and across multiple fingers. Previous mobile tactile displays have used single-element vibration feedback for one finger (by vibrating the entire device, such as a touchscreen display, e.g., [24]) or multiple fingers (using finger-mounted devices that could freely move independent of each other [11,25]), or spatially distributed tactile feedback (pin matrix) on single or multiple finger pads a fixed distance apart [19,20].
As people use both spatially distributed information within a finger pad and across fingers when directly interacting with the environment, including both components seems warranted. The question is what is an appropriate number of tactile elements to have per finger pad and how many finger pads should be used. If we were to consider solely the tactile resolution of the finger pads in determining the number of elements per finger pad, a 400-pin array would be required for each finger [26]. However, Weisenberger [10] showed that although increasing the number of tactile elements providing feedback to a finger improves performance, the largest improvement was between one and four elements. This suggests a more tractable number of pins for each finger on a mobile device. In terms of how many fingers to provide feedback for, Burch and Pawluk [11] showed that providing feedback to multiple fingers versus a single finger also significantly improved performance when textured diagrams (the most recommended tactile diagram format) are used. The most significant increase occurred between one and two fingers [25], which suggests that feedback for only two fingers is needed. The study also found that BVI users usually kept their fingers a comfortable distance apart and did not appear to change the distance between them (although they could), suggesting that a fixed distance with the fingers in a natural pose would work best. We will consider the use of pin displays for two fingers on the exploring hand of at least four elements each.

3. Materials and Methods

3.1. System Overview

The Cobotic tactile display acts as a refreshable display that both portrays a virtual diagram represented in electronic form and guides its haptic exploration. The system is made up of four functional blocks: the main processor, the mobile robot base (robot block), the cover shell containing the components which directly interact with the user (sensor cap block), and the higher-level software representing the virtual diagram and providing the shared control/virtual fixtures and tactile feedback signals (algorithm block). The interaction between the blocks, as well as their main components, is given in Figure 1. Figure 2 shows the system’s current prototype.
To haptically explore the virtual diagram, the user grasps the cover shell surface with one of their hands (with their index and middle fingers resting on the tactile pin display) and applies a planar force vector according to their desired exploration intentions. The entire force applied by the user is measured by a two-axis force-sensing system that connects the shell to the robot base. The applied force, together with the device’s planar position measured on the robot base, is sent to an off-device computer. The off-device computer then uses these measurements combined with the algorithm block to determine where the user is currently located in the virtual diagram. From the diagram location, the algorithm block provides the signals to control the tactile pin displays (implementing the tactile feedback) and the movement of the robot base (implementing the guidance/virtual fixtures). These signals are then sent to the main processor of the device to control the individual pins on the tactile display and the omni drive of the mobile robot base.

3.2. Main Processor

The main processor is a commercially available Arduino board (Mega 2560) that acts as the central component that controls device behavior and the primary communicator between different hardware components. The Arduino board is responsible for collecting and relaying inputs from the device (measured planar force and position) to the algorithm block on the off-device computer. It also relays the desired mobile robot commands (base velocity) and movement of each of the pins (on/off, vibration frequency) in the tactile pin display from the algorithm block, as well as controls the timing of all the outputs.

3.3. Robot Block

The robot block consists of an omni-drive system, which allows the mobile robot base to move directly in any planar direction, and a position-sensing system, rigidly attached to the omni-drive platform, that measures the planar position of the platform’s center (Figure 3). The key requirements of this block include a cost-effective miniature design allowing for smooth and precise mechanical movement, and accurate position localization (x, y, and θ) in the 2D plane with respect to a start location. It is also desirable to have a large planar workspace, ideally large enough to display standard-sized (24″ × 36″) architectural blueprints of buildings for navigation planning.
Additionally, a push button switch is housed on the side of the omni-drive platform to be used as a homing button, which moves the device to the heart of the diagram.

3.3.1. Omni-Drive System

The omni-drive system is a motorized platform consisting of three servo motors (Hitec multiplex: HSR-2645CR) positioned concentrically 120° apart around the vertical axis of the base. Each servo motor is attached to a 38 mm plastic omni wheel (Nexus: RB-Nex-136). Each wheel consists of small spinners placed around the radius of the main wheel frame and oriented orthogonally to the wheel’s axis of rotation. Due to this setup, the wheels can achieve smooth omnidirectional movements in a plane (x, y, θ). Using this design, the device can be moved linearly in the xy-plane and rotated around the z-axis freely.
The desired movement of the robot base is determined in the algorithm block, which provides the overall desired velocity vector of the robot base for that instant in time, V_h,net. This velocity is determined from the admittance control equation (Section 3.5.2), which has inputs from the guidance algorithm and the force applied by the user. The omni-drive system then takes the net velocity V_h,net and decomposes it into three velocity components V_h,1, V_h,2, and V_h,3 [27], one for each wheel. Each omni wheel comprises the main wheel frame and small free-rolling spinners oriented at 90° with respect to the main wheel frame. Therefore, the total linear velocity V_h,i of each omnidirectional wheel can be represented as:

$$V_{h,i} = \sqrt{V_i^2 + V_{i,\mathrm{spinner}}^2}$$

where V_i is the linear velocity of the main wheel frame, V_i,spinner is the linear velocity of the free-rolling spinners, and V_h,i is the total linear velocity of each omnidirectional wheel. The linear velocity of the main wheel frame, V_i, can be related to the angular velocity, θ̇_i, of its corresponding servo motor by:

$$V_i = r\,\dot{\theta}_i$$

where r is the radius of the main omni-wheel frame. Using omnidirectional kinematic modeling, it can be shown that:

$$\dot{\theta}_1 = \frac{V_1}{r} = \frac{1}{r}\left[\,V_{hx}\,\right]$$

$$\dot{\theta}_2 = \frac{V_2}{r} = \frac{1}{r}\left[-\tfrac{1}{2}V_{hx} + \tfrac{\sqrt{3}}{2}V_{hy}\right]$$

$$\dot{\theta}_3 = \frac{V_3}{r} = \frac{1}{r}\left[-\tfrac{1}{2}V_{hx} - \tfrac{\sqrt{3}}{2}V_{hy}\right]$$

where V_hx and V_hy are the overall linear velocities of the robot base in the x and y directions as shown in Figure 3.
The HSR-2645CR servo motors are controlled directly by a commanded speed.
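As an illustration of this decomposition, the following minimal C++ sketch computes the three wheel angular velocities from a commanded planar velocity using the kinematic equations above; the variable names, wheel radius, and example inputs are assumptions for illustration, not the authors’ firmware.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch of the omni-drive decomposition (Section 3.3.1).
// Assumes wheel 1's drive direction is aligned with the +x axis of the base.
struct WheelSpeeds { double w1, w2, w3; };  // servo angular velocities (rad/s)

WheelSpeeds decompose(double vhx, double vhy, double r) {
    const double s3 = std::sqrt(3.0) / 2.0;
    WheelSpeeds s;
    s.w1 = (1.0 / r) * (vhx);
    s.w2 = (1.0 / r) * (-0.5 * vhx + s3 * vhy);
    s.w3 = (1.0 / r) * (-0.5 * vhx - s3 * vhy);
    return s;
}

int main() {
    const double r = 0.019;                   // 38 mm omni wheel -> 19 mm radius (m)
    WheelSpeeds s = decompose(0.05, 0.02, r); // net velocity: 5 cm/s in x, 2 cm/s in y
    std::printf("w1=%.2f w2=%.2f w3=%.2f rad/s\n", s.w1, s.w2, s.w3);
    return 0;
}
```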

3.3.2. Position-Sensing System

The position-sensing system, which measures the Cobot’s planar position, plays a critical role in the overall performance of the Cobot: reasonable precision of these measurements is needed to accurately depict the tactile diagram. The system consists of an optical mouse sensor (Pixart Imaging: PAW3515DB) coupled with a commercially available IMU (Sparkfun Electronics: MPU9250) to determine the precise location and orientation information of the device. The optical mouse sensor is mounted underneath the moving robot base using hanging screws to measure the movement of the floor with respect to the moving robot (Figure 4). The IMU is attached to the robot base as well (Figure 4).
The PAW3515DB optical mouse sensor is a low-cost 2D motion sensor that uses sequences of images captured at a frame rate of 3300 frames/s to estimate the relative motion between the sensor and the environment. Due to its high resolution of up to 1600 dots per inch, this sensor can provide accurate displacement along the x-axis (∆x) and y-axis (∆y) of the 2D coordinate system. We used a resolution of 1000 dots per inch. On the robot base, the xy-plane of the optical mouse sensor is aligned with the xy-plane of the omni-drive system (i.e., the device movement along the x-axis or y-axis corresponds to optical mouse sensor displacement along these axes as well). The displacement data (number of dots moved) from the optical mouse sensor is processed by the main processor to update the current position of the Cobot.
The MPU9250 is a 9-axis MEMS system that consists of two chips: the MPU6500, consisting of a 3-axis gyroscope and 3-axis accelerometer, and the AK8963, containing a 3-axis magnetometer. The MPU6500 provides linear acceleration and angular velocity information in the x, y, and z directions, whereas the AK8963 provides magnetic field information in these three directions. The z-axis of the omni-drive system is aligned with the z-axis of the IMU (i.e., in parallel with the gravitational acceleration). The angular velocity information from the gyroscope can be used to calculate the orientation angle (θ_gyro) using the following equation:

$$\theta_{gyro}(t) = \int_0^t \omega_z(\tau)\, d\tau$$

where ω_z(·) denotes the angular velocity along the z-axis. However, this is prone to long-term (low-frequency) integration errors due to the inherent bias in the gyroscope output.
The magnetometer can also be used to measure the orientation angle (θ_mag), using the following relation:

$$\theta_{mag}(t) = \tan^{-1}\!\left[\frac{m_x(t)}{m_y(t)}\right]$$

where m_x(t) and m_y(t) denote the earth’s magnetic field strength along the x- and y-axes. In contrast to the gyroscope measurement, this orientation angle is prone to short-term (high-frequency) errors due to hard-iron and soft-iron distortions in the magnetic field signals.
To resolve the limitations of the two methods for measuring the orientation angle, a complementary filter is used. This filter has the advantage of being simpler and faster, and of requiring less computational power, than a Kalman filter. The idea is that in the short term we use θ_gyro, because it is very precise and not susceptible to magnetic disturbances, and in the long term we use θ_mag, because it is not affected by bias errors. This idea can be formulated into the following equation to calculate the Cobot’s orientation angle (θ):

$$\theta_{k+1} = \frac{T}{T+\Delta T}\left(\theta_k + \Delta T\,\omega_{z,k}\right) + \frac{\Delta T}{T+\Delta T}\,\theta_{mag,k}$$

where T indicates the desired time constant, ΔT indicates the sampling interval, and {k, k + 1} denote the time indices of the discrete signals. With the filter weights fixed at T/(T + ΔT) = 0.98, the following equation was used to calculate the orientation angle θ:

$$\theta_{k+1} = 0.98\left(\theta_k + \Delta T\,\omega_{z,k}\right) + 0.02\,\theta_{mag,k}$$
The orientation angle (θ) from the IMU calculation is used to adjust the displacement data (Δx and Δy) provided by the optical mouse sensor to obtain the true displacement (Δx_true and Δy_true) using the following equations:

$$\Delta x_{true} = \Delta x \cos\theta - \Delta y \sin\theta$$

$$\Delta y_{true} = \Delta x \sin\theta + \Delta y \cos\theta$$
Although wheel slippage may occur, this is accounted for in the algorithm block by using the true position measured with the position-sensing system rather than the rotational angle of the motor.
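As a concrete illustration of this dead reckoning scheme, the following C++ sketch combines the complementary-filter orientation update with the rotation of the raw mouse displacements. It is a minimal sketch only; the function names, data sources, and example values are assumptions rather than the authors’ firmware.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative dead reckoning sketch (Section 3.3.2): a complementary filter
// fuses gyroscope and magnetometer headings, and the fused angle rotates the
// raw optical-mouse displacements into the workspace frame.
struct Pose { double x = 0.0, y = 0.0, theta = 0.0; };

// One dead reckoning update.
// dx, dy    : raw mouse displacement this step (sensor frame)
// omega_z   : gyroscope angular rate about z (rad/s)
// theta_mag : magnetometer heading estimate (rad)
// dt        : sampling interval (s)
void updatePose(Pose& p, double dx, double dy,
                double omega_z, double theta_mag, double dt) {
    // Complementary filter: trust the gyro in the short term and the
    // magnetometer in the long term (the two weights sum to one).
    p.theta = 0.98 * (p.theta + dt * omega_z) + 0.02 * theta_mag;

    // Rotate the sensor-frame displacement by the current orientation.
    const double dx_true = dx * std::cos(p.theta) - dy * std::sin(p.theta);
    const double dy_true = dx * std::sin(p.theta) + dy * std::cos(p.theta);

    p.x += dx_true;
    p.y += dy_true;
}

int main() {
    const double kPi = 3.14159265358979;
    Pose p;
    // Simulated step: 1 mm forward in the sensor frame while rotated ~30 degrees.
    updatePose(p, 1.0, 0.0, 0.0, kPi / 6.0, 0.01);
    std::printf("x=%.3f y=%.3f theta=%.3f rad\n", p.x, p.y, p.theta);
    return 0;
}
```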

3.4. Sensor Cap Block

The sensor cap block (Figure 5) consists of a mechanical shell which is mounted as a cap on the force-sensing system (existing between the mobile robot base and the cap) and otherwise “floats” above the robot base. The shell provides a place for the user to hold the device and apply force, as well as to obtain tactile feedback through two electronic Braille cells mounted within the cap. The key requirements for the sensor cap block are that it is easy and comfortable to hold while the device moves, measures a reasonable approximation of the applied force by the user in the xy-plane, and provides tactile feedback to the user’s fingertips based on their location in the virtual diagram.

3.4.1. Shell

The current prototype uses the top surface of a computer mouse due to its ergonomic design that makes it easy for a user to grasp and naturally rest their index and middle fingers on the top (where the tactile displays are located).

3.4.2. Force-Sensing System

The force-sensing system uses a MicroJoystick (Interlink Electronics: 54-24451) to measure the applied force. It is essentially an isometric joystick, preserving a nearly fixed position relationship between the user’s hand and the robot base when force is applied to the stick. Four pressure-sensitive force-sensing resistors (FSRs) at its base measure the applied force’s magnitude. As the FSRs are placed in a pattern corresponding to the cardinal directions (FSR_East, FSR_West, FSR_North, and FSR_South), with the output of each FSR measured with respect to a common ground, they can be combined to approximate the vector force in the xy-plane (not including a scaling factor):

$$F_x = FSR_{East} - FSR_{West}$$

$$F_y = FSR_{North} - FSR_{South}$$

$$\text{Applied force } F_{in} = \sqrt{F_x^2 + F_y^2}\;\angle\,\tan^{-1}\!\frac{F_y}{F_x}$$

The precise scaling factor between the calculated vector force F_in and the actual force is not needed, as the algorithm using this information applies an additional scaling factor that is tuned separately (Section 3.5.2).
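A small sketch of this combination is given below; it only illustrates the arithmetic, and the raw FSR values and function names are placeholders rather than the device’s actual analog readings.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch (Section 3.4.2): approximating the planar force vector
// from the four FSR readings of the MicroJoystick base. The input values are
// placeholders for the Arduino analog readings.
struct PlanarForce { double fx, fy, magnitude, angle; };

PlanarForce forceFromFSRs(double east, double west, double north, double south) {
    PlanarForce f;
    f.fx = east - west;               // net (unscaled) force component along x
    f.fy = north - south;             // net (unscaled) force component along y
    f.magnitude = std::sqrt(f.fx * f.fx + f.fy * f.fy);
    f.angle = std::atan2(f.fy, f.fx); // direction of the applied force (rad)
    return f;
}

int main() {
    // Example: the user pushes mostly east and slightly north (raw FSR units).
    PlanarForce f = forceFromFSRs(300.0, 50.0, 120.0, 80.0);
    std::printf("Fx=%.1f Fy=%.1f |F|=%.1f angle=%.2f rad\n",
                f.fx, f.fy, f.magnitude, f.angle);
    return 0;
}
```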

3.4.3. Tactile Display

The tactile display consists of two piezoelectric Braille cells (Metec AG: Braille cell P16), one for the index finger and one for the middle finger, mounted with the Braille cell pins (each a 2 × 4 pin array) extruding from the shell surface. The design follows that previously used for a passive, mobile tactile pin display extended to two Braille cells [20]. Each tactile pin is directly driven by turning power on and off through a corresponding solid-state single-pole double-throw relay (IXYS Integrated Circuits Division: LCC120). Each relay is directly controlled by an individual output line of the main processor, with all pins able to be controlled in parallel. The main processor is able to generate square waves of varying frequency (0–200 Hz) and duty cycle (0–100%) on each pin to create different vibration effects that produce a feeling similar to texture. The texture information is not intended to be interpreted as a particular texture, but instead used to help separate objects into their constitutive parts and indicate part orientation. Different textures indicate different parts and added “stripes” indicate orientation [11]. The texture to be produced by each pin at a given point in time is provided by the algorithm block based on the location of the pin within a textured virtual tactile diagram [11].
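To illustrate how a single pin can be driven with a given frequency and duty cycle, the sketch below computes the relay state over time. It is a simplified illustration: the helper function and timing source are assumptions, not the actual Arduino firmware.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch (Section 3.4.3): the on/off state of one tactile pin
// driven as a square wave with a given frequency and duty cycle. setPin()
// stands in for switching the solid-state relay powering a Braille cell pin;
// a simple millisecond counter replaces the Arduino timing hardware.
void setPin(bool up) { std::printf("pin %s\n", up ? "UP" : "DOWN"); }

// Desired pin state at time t_ms for the requested vibration.
bool squareWaveState(double t_ms, double freq_hz, double duty) {
    if (freq_hz <= 0.0) return duty > 0.0;      // 0 Hz: pin statically up or down
    const double period_ms = 1000.0 / freq_hz;  // one vibration cycle
    const double phase = std::fmod(t_ms, period_ms);
    return phase < duty * period_ms;            // high for the duty fraction
}

int main() {
    // Simulate 50 ms of a 100 Hz, 50% duty-cycle texture signal.
    bool last = false;
    for (int t = 0; t < 50; ++t) {
        const bool state = squareWaveState(static_cast<double>(t), 100.0, 0.5);
        if (state != last) { setPin(state); last = state; }
    }
    return 0;
}
```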

3.5. Algorithm Block

The algorithm block contains the representation of the virtual tactile diagram, which provides spatially coded information about the tactile (vibration) feedback and virtual haptic fixtures to be used. This information is used, along with the position and force measured by the device, to generate the signals controlling the individual pins on the tactile display and the omni-drive system. This block is implemented on a personal computer using MATLAB®.

3.5.1. Spatial Tactile Feedback

For the tactile display, a tactile diagram is represented virtually as a colored image, where different colors correspond to the different frequencies used to generate textures and edges. The tactile diagram is translated from a visual diagram offline, using colors to separate parts of objects and represent their differing orientation, as this method was previously found to significantly improve performance [11]. The tactile feedback algorithm uses the current position of the center of the mobile base (x, y, θ), the geometry relating the center of the base to the location of an individual pin, and the scaling factor between the physical workspace and virtual diagram to determine the appropriate location of each pin in the virtual diagram. The vibration signal is then generated based on the color code of the corresponding diagram point. The use of θ as part of the calculation provides high position accuracy. In previous pin displays, users were required to keep the device vertical, distracting them from the primary task of exploring the diagram.
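The following sketch illustrates this mapping for one pin: the pin’s fixed offset from the base center is rotated by θ, translated by the base position, and scaled into diagram coordinates. The specific offsets and scale factor are illustrative assumptions, not the values used in the actual MATLAB implementation.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch (Section 3.5.1): mapping a tactile pin to a location in
// the virtual diagram from the base pose (x, y, theta), the fixed offset of
// the pin relative to the base center, and the workspace-to-diagram scaling.
struct Point { double x, y; };

Point pinInDiagram(Point base, double theta, Point pinOffset, double scale) {
    // Rotate the pin's body-frame offset by the base orientation...
    const double ox = pinOffset.x * std::cos(theta) - pinOffset.y * std::sin(theta);
    const double oy = pinOffset.x * std::sin(theta) + pinOffset.y * std::cos(theta);
    // ...then translate by the base position and scale into diagram units.
    return { (base.x + ox) * scale, (base.y + oy) * scale };
}

int main() {
    Point base = {120.0, 80.0};   // base center position in workspace units
    Point offset = {10.0, 25.0};  // one Braille-cell pin, body frame (assumed)
    Point p = pinInDiagram(base, 0.2, offset, 0.072 * 0.5); // study's mapping factors
    std::printf("diagram location: (%.1f, %.1f)\n", p.x, p.y);
    // The returned point would then be looked up in the colored diagram image
    // to pick the vibration frequency for that pin.
    return 0;
}
```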

3.5.2. Shared Control of Movement

To provide robotic guidance to our BVI users, guidance virtual fixtures were implemented along the edges of objects and object parts in the virtual diagram where we expect the most salient information about the object to be. Guidance fixtures facilitate movement in a preferred direction along the edges, which should make it easier for users to track edges and find other highly salient information to interpret the diagram. These guidance fixtures are represented in the computer as edges of the objects and object parts.
Control of the movement of the device is shared between the guidance virtual fixtures and the user. Although impedance control is often used with haptic feedback, such systems rely on backdrivable force-source actuators and are prone to instability. This can produce unexpected, undesirable movements when interacting with guidance/virtual fixture algorithms [17]. Instead, the system uses admittance control, which exhibits better behavior. An admittance controller uses the measured input force, provided by the user, along with an admittance variable, α, to produce the desired velocity, V_h,net, of the device, which is provided to the robot base.
$$V_{h,net} = \alpha F$$
In the absence of any virtual fixtures, F = F_in (i.e., the force applied by the user to the device in the xy-plane). When a virtual fixture is present, the input force, F_in, from the user is decomposed (Figure 5a) into the force applied in the desired direction, F_d, and in the undesired direction, F_τ (orthogonal to the desired direction), based on the position of the virtual fixtures on the virtual tactile diagram. The relative contribution of each of these forces is weighted using an admittance value, k_τ ∈ [0, 1], to allow different relations in sharing control between the user and the fixture.

$$V_{h,net} = \alpha\left(F_d + k_\tau F_\tau\right)$$
If the admittance value is set to zero, the virtual fixture completely constrains the motion to the preferred direction only (in our case, moving towards the closest edge). If the admittance value is set to one, the compliance of movement is equal in all directions, allowing the user to move freely. When the admittance value is set between 0 and 1, the virtual fixture responds to the user’s input force and direction by encouraging desired motions along the path; however, it also allows the user to “break free” and explore other areas of the virtual diagram. This balance is expected to be important because, although our goal is to improve the ability of users to track edges, active, free exploration is considered an important aspect of haptic perception [28]. This suggests that an admittance value somewhere between 0 and 1 may be the best choice to obtain both objectives.
In Figure 6, we use different admittance values (0 ≤ k_τ ≤ 1) for two different basic geometric shapes. This method first determines the shortest deviation vector, s, as a vector originating from the device’s current position, p_c, and pointing towards the closest virtual fixture (i.e., edge) point, p_v, on the virtual diagram.

$$s = p_v - p_c$$

Thereafter, the desired direction (D) is calculated as the sum of two vectors: vector s and vector t. Vector t points in the direction of the tangent to the virtual fixture/edge at the closest point on the diagram.

$$D = t + k_s\, s$$

$$\hat{d} = \frac{D}{\|D\|}$$

where k_s is the stiffness factor for the virtual fixture, which determines the contribution of the tangent and the shortest deviation to the desired direction D, and d̂ indicates a unit vector in the desired direction. Lastly, the desired force F_d is calculated by projecting the input force F_in onto the desired direction d̂. The nondesired force F_τ is calculated as the difference between the input force F_in and the desired force F_d.
$$F_d = \left(F_{in} \cdot \hat{d}\right)\hat{d}$$

$$F_\tau = F_{in} - F_d$$
If the user wanders away from the virtual diagram, the homing button can be used to move the Cobot to the nearest point on the diagram. Under this scenario, the shortest vector s (Equation (17)) is used to calculate the device’s desired velocity such that:

$$V_{h,net} = \alpha\, s$$
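The sketch below ties these pieces together: it computes the shortest deviation, the desired direction, the force decomposition, and the shared-control velocity. The closest edge point and tangent would normally come from the virtual diagram representation, and the gains (α, k_s, k_τ) shown are placeholders, not the authors’ tuned values.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch (Section 3.5.2): shared control with a guidance virtual fixture.
struct Vec2 {
    double x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(double k) const { return {x * k, y * k}; }
    double dot(Vec2 o) const { return x * o.x + y * o.y; }
    double norm() const { return std::sqrt(x * x + y * y); }
};

// Desired base velocity given the user's force and the nearest fixture.
Vec2 sharedControlVelocity(Vec2 f_in, Vec2 p_c, Vec2 p_v, Vec2 tangent,
                           double alpha, double k_s, double k_tau) {
    Vec2 s = p_v - p_c;                   // shortest deviation to the edge
    Vec2 D = tangent + s * k_s;           // desired direction (unnormalized)
    double n = D.norm();
    Vec2 d = (n > 1e-9) ? D * (1.0 / n) : Vec2{0.0, 0.0}; // unit desired direction
    Vec2 f_d = d * f_in.dot(d);           // projection of F_in onto d
    Vec2 f_tau = f_in - f_d;              // remaining, non-preferred component
    return (f_d + f_tau * k_tau) * alpha; // admittance law V = alpha(F_d + k_tau * F_tau)
}

int main() {
    Vec2 f_in = {1.0, 0.5};                   // user's applied force (arbitrary units)
    Vec2 p_c = {0.0, 0.0}, p_v = {0.0, 1.0};  // device position vs. closest edge point
    Vec2 tangent = {1.0, 0.0};                // edge tangent at the closest point
    Vec2 v = sharedControlVelocity(f_in, p_c, p_v, tangent, 0.8, 0.5, 0.5);
    std::printf("V_h,net = (%.3f, %.3f)\n", v.x, v.y);
    return 0;
}
```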

3.6. BVI User Study Protocol

One-on-one semi-structured interviews were conducted with BVIs to obtain feedback as to the ease of use and utility of the developed device for exploring tactile diagrams. There were four participants; three of them had light sensitivity only whereas one had low vision (and was blindfolded when interacting with the device). In the interview session, the experimenter first explained the purpose of the device, the concept of a Cobot and shared control, and how to interact with the provided prototype. The experimenter then guided a participant’s hand in exploring the Cobot and work area. When the participant was comfortable with the device, they were given time to use it to interact with virtual tactile diagrams.
Each participant was allowed to use the Cobot to explore five different (in random order) 2D geometric shapes (Figure 7). Geometric shapes were chosen due to their simplicity and the frequency that they are used by BVI students. These virtual shapes were implemented using MATLAB such that their maximum span was 4 inches (i.e., 10.16 cm). The unit of measurement for these virtual shapes was set as ‘points’ (one inch was represented using 72 points), whereas one inch of mouse sensor movement was represented by 1000 dots. Therefore, a distance mapping factor of 0.072 (72/1000) was required. The following equation was used for the mapping:
$$V_d = S_c \times 0.072 \times C_d$$

where V_d represents the distance travelled on the virtual diagram, C_d represents the distance travelled by the Cobot (mouse sensor data), and S_c represents the scaling constant. The scaling factor S_c is used to allow a diagram to span the available workspace within the user’s reach. During this experimentation, a scaling factor of 0.5 was used (i.e., 2 inches of movement of the Cobot corresponded to one inch of movement on the virtual diagram).
The starting planar position (xy-plane) of the Cobot was set relative to the virtual shape and was always selected to be outside the diagram. At the starting point, the user’s hand is assumed to be in a neutral position, which defines the orientation of the x- and y-axes; therefore, the initial rotation of the Cobot (θ) relative to the virtual diagram was set to zero. The algorithm for the admittance controller used three separate admittance values (k_τ = 0, k_τ = 0.5, and k_τ = 1) for the same geometric shape so that users could feel the range of how the control of movement could be shared. The value of the stiffness factor (k_s) was set to 0.5 and not changed during the session.
Each participant was asked the following seven questions (Table 1) about their experience related to the usability of the Cobot. These questions were asked during the later stages of a participant’s exploration with the device. Participants were asked to rate their feelings of usability on a scale of one to five, with a value of five indicating strongly agree. They were also asked to share any specific comments related to these questions.
After each participant finished using the Cobot, they were also asked to provide feedback on their perceived usefulness of the shared control concept for diagram exploration both on its own and in combination with tactile feedback.

4. Results

To validate the reliability of the mouse sensor data to represent the location with respect to the starting location, we tested the robot movement along a simple trajectory in various directions. Results are shown for the robot moving from the starting location to a 180 mm (7.1 inches) distance along the y-axis (i.e., [0,0] to [0,180]). The commanded trajectory is denoted by a solid (black) line in Figure 8. The results for three different trials of robot movement are shown (trial 1—dotted red line: [0,0] to [1.32,177.1]; trial 2—dotted green line: [0,0] to [0,184.59]; trial 3—dotted blue line: [0,0] to [1.45,179.5]).
For the user study, participants were observed in their initial interaction, the time it took to learn the system, and, after they learned the system, any problems or awkwardness in using the Cobot. All four participants quickly learnt how to use the Cobot and showed no signs of awkwardness or difficulty in using the device in the different shared control modes. Table 2 lists the rating scale answers provided by the four participants, P1, P2, P3, and P4, to the seven questions.
In terms of usability (question 1), all four participants stated that the device was small, maneuverable, and easy to use. Participant P4 mentioned that “the device would be more ergonomic (i.e., reduce fatigue during longer interaction times) if lowered in height”. This was consistent with observations by the study investigator of the interaction of the participants’ hands with the Cobot. It was observed that participants were not able to rest their arm on the desk while holding the top shell. Lowering the height of the top shell by approximately 1 inch would allow this to occur for all the participants.
In terms of question 2, all four participants further suggested that although the size was not a problem when using the device, it would be more portable if it was made smaller (easier to carry around).
For question 3, all participants liked the idea of being able to adjust the weighting (i.e., admittance value k τ ) of how control was shared. They had no particular opinion towards using a particular fixed value of k τ . Instead, all were of the view that being able to control the admittance value would provide them flexibility in interacting with the virtual diagram. For example, participant P4 suggested using stronger Cobotic guidance (smaller admittance value) when exploring newer diagrams or getting an overview of a diagram. On the other hand, they suggested having weaker Cobotic guidance to “break free” when they had general familiarity with the diagram and wanted to obtain more details. Participant P2 stated that “everyone is different, so the ability to control k τ is a good thing”. All four participants thought that using a simple control knob on the Cobot to adjust the weighting would be easiest to use.
When asked about question 4, all participants agreed that it would be easy for most people to learn how to use the device. Participant P1 further suggested that “providing audio instructions on how to use the device would be helpful”.
All participants were enthusiastic about the homing button (question 5). They believed that this would be particularly helpful during free exploration. For example, if someone wandered away from key areas during free exploration and felt lost, the homing button could be used to move the device back to the heart of the diagram.
All four participants were of the view that the device was easy to use (question 6) and thought that the use of the mouse cap to control the robot was helpful (question 7).
In terms of usefulness, all four participants liked the idea of using Cobotic assistance when exploring tactile diagrams and did not report any problems when interacting with the device. Participant P4 stated that “this concept may be very helpful and they look forward to see how this can be used in different applications”. Similarly, participant P3 stated that “they would love to use the device” and participant P2 mentioned that “use of tactile feedback would be great help”.

5. Discussion

This paper describes the development of a small, cost-effective Cobot tactile display that provides distributed tactile feedback to two fingers, as well as the ability to provide haptic guidance to BVI users exploring a tactile graphic. The new prototype is reduced in size compared to the previous prototype and achieves higher position accuracy relative to a start location. The current position accuracy is sufficient for exploring a tactile diagram haptically without vision. The size of the workspace is potentially limitless, although restricted by practical limitations of arm reach. The new prototype also successfully implements the control/guidance of the Cobot based on the sensed force applied by the user and the guidance/virtual fixtures represented virtually in the off-device computer. Semi-structured interviews involving four BVI participants suggested the device is easy to use and has the potential to improve access to tactile diagrams.
Previous problems with position accuracy were overcome by the development of a dead reckoning module combining information from an optical mouse sensor and an IMU. Accurate haptic spatial perception is crucial in the absence of vision for correct interpretation of shapes and spatial relations in tactile diagrams [19]. Unlike the system used by Rastogi and his colleagues [19], our current prototype does not suffer from errors due to rotation of the user’s hand.
The current prototype also successfully implements guidance/virtual fixtures in conjunction with the admittance control of the movement of the robot base with shared control with the user, which was not completed in the previous version of the device [16]. This included the measurement of the planar force applied by the user to the Cobot device to indicate intention of movement. Although angular movements by participants holding a wrist manipulandum can have a relatively small standard deviation of 2 degrees in matching tasks under controlled conditions [29], several reasons suggest that the measured force vector does not need to be as accurate for our purposes: (1) the force measurement does not affect the location accuracy of the tactile feedback or virtual fixtures, (2) the accuracy of “movement intention” is unlikely to be as high as under controlled conditions, particularly as the main focus of the user will be on interpreting the tactile/haptic feedback as a function of position, (3) the shared control algorithm is sufficiently complicated that users will not be able to predict the results of their “movement intention” in a highly precise manner, and (4) participant feedback from our user study indicated that users did not experience any negative effects when exploring diagrams.
The preliminary feedback from potential BVI users further suggests that the addition of guidance/virtual fixtures and shared control has the potential to improve access to refreshable tactile graphics, especially for diagrams that are more complex. Previous work by Paneels and his colleagues [15] suggested that using guidance/virtual fixtures alone may be sufficient, although they only examined performance with simple line graphs and scatter plots. The advantage, if this is true, would be a simpler and cheaper device. A comparison between the use of tactile feedback + guidance/virtual fixtures and the use of guidance/virtual fixtures only is needed to determine if their results extend to more complicated diagrams.
Originally, we expected that the use of shared control with haptic fixtures would require further study to determine the optimum settings for use of the Cobot. However, participants preferred, and suggested, using a simple knob so that they could control the amount of shared control themselves while actively exploring a diagram. This is likely to have great utility, as the amount of shared control desired is likely to shift during the exploration of a diagram as the user’s intention shifts, as mentioned by one of the participants. For example, when using physical tactile diagrams in classroom settings, teachers of the blind will typically guide the BVI student over the whole graphic for an overview and show them any diagram keys. Then the student might be expected to more independently explore the graphic [30]. However, these steps may differ if the student is already familiar with the graphic.

6. Limitations and Future Work

A limitation of this work is that it was unable to draw any conclusions about the performance of BVI users with this device in terms of their ability to comprehend diagrams, the time taken, and the cognitive load. This was because the focus was on the assessment of the basic functionality of the system, in terms of position accuracy and the implementation of the shared control algorithm, and on user assessment of the usability and usefulness of the device. This is an important first step to ensure any user studies will produce meaningful results. Future studies will compare the use of the system to the use of haptic fixtures and shared control only, to the use of tactile feedback only, and to the current standard physical diagrams. Participants will be assessed in their understanding of the diagrams, the time taken to explore the diagram, and the cognitive load involved.
A limitation of the overall project is that it only focuses on the display aspect of providing diagram information. Another important aspect is the conversion of visual diagrams to tactile diagrams, which is a time-consuming task. The conversion process also involves a great deal of simplification that requires an experienced professional to perform. However, this component can be, and is being, addressed separately.
Other limitations of this work are that the user study and preliminary semi-structured interviews were conducted with a small number of BVIs, only simple diagrams were used, and a limited number of shared control levels were experienced. However, the participants were a random sample of the target population, and their responses indicated an ability to extrapolate to more complex diagrams and differing levels of shared control.

7. Conclusions

The developed Cobot tactile display prototype was able to accurately render tactile feedback and guidance fixtures for the exploration of tactile graphics by BVIs. The display achieved both sufficient position accuracy and a large workspace. Participant feedback in a user study suggested that the device would be more comfortable if it was lowered in height, so that users could rest their arms on the table, and also reduced in size. Participants also liked the use of a homing button, which was also consistent with advice to have a reference point when exploring tactile graphics [30]. Further research is needed to determine how best to integrate guidance/virtual fixtures with tactile pin display feedback, or whether either alone performs better. The shared control paradigm also has potential benefits for BVI student–teacher interactions and other collaborations in remote learning and working environments.

Author Contributions

Conceptualization, S.G. and D.T.V.P.; methodology, S.G. and D.T.V.P.; software, S.G.; investigation, S.G.; resources, D.T.V.P.; data curation, S.G.; writing—original draft preparation, S.G. and D.T.V.P.; writing—review and editing, S.G. and D.T.V.P.; supervision, D.T.V.P.; project administration, D.T.V.P.; funding acquisition, D.T.V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported by the National Science Foundation under grant number CBET-16-5226.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Virginia Commonwealth University (protocol code HM20007890; date of approval 16 August 2016).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Supporting data can be requested via email to the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Vidal-Verdu, F.; Hafez, M. Graphical Tactile Displays for Visually-Impaired People. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 119–130.
2. Basdogan, C.; Giraud, F.; Levesque, V.; Choi, S. A Review of Surface Haptics: Enabling Tactile Effects on Touch Surfaces. IEEE Trans. Haptics 2020, 13, 450–470.
3. Kim, S.; Ryu, Y.; Cho, J.; Ryu, E.-S. Towards Tangible Vision for the Visually Impaired through 2D Multiarray Braille Display. Sensors 2019, 19, 5319.
4. Mukhiddinov, M.; Kim, S.-Y. A Systematic Literature Review on the Automatic Creation of Tactile Graphics for the Blind and Visually Impaired. Processes 2021, 9, 1726.
5. Hoshino, H. A Method to Represent an Arbitrary Surface in Encounter Type Shape Representation System. In Proceedings of the 7th IEEE International Workshop on Robot and Human Communication, Takamatsu, Japan, 30 September–2 October 1998; pp. 107–114. Available online: https://cir.nii.ac.jp/crid/1573950400262429824 (accessed on 4 May 2022).
6. Tomita, H.; Agatsuma, S.; Wang, R.; Takahashi, S.; Saga, S.; Kajimoto, H. An Investigation of Figure Recognition with Electrostatic Tactile Display. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 July 2019; pp. 363–372.
7. Wijntjes, M.W.A.; van Lienen, T.; Verstijnen, I.M.; Kappers, A.M.L. The Influence of Picture Size on Recognition and Exploratory Behaviour in Raised-Line Drawings. Perception 2008, 37, 602–614.
8. Owen, J.M.; Petro, J.A.; D’Souza, S.M.; Rastogi, R.; Pawluk, D.T.V. An Improved, Low-Cost Tactile ‘Mouse’ for Use by Individuals Who Are Blind and Visually Impaired. In Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 25–28 October 2009; pp. 223–224.
9. Petit, G.; Defresne, A.; Levesque, V.; Hayward, V.; Trudeau, N. Refreshable Tactile Graphics Applied to Schoolbook Illustrations for Students with Visual Impairment. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, Halifax, NS, Canada, 13–15 October 2008; pp. 89–96.
10. Weisenberger, J. Changing the Haptic Field of View: Tradeoffs of Kinesthetic and Mechanoreceptive Spatial Information. In Proceedings of the World Haptics Conference, Tsukuba, Japan, 22–24 March 2007.
11. Burch, D.; Pawluk, D.T.V. Using Multiple Contacts with Texture-Enhanced Graphics. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 287–292.
12. Rastogi, R.; Pawluk, D.T.V. Dynamic Tactile Diagram Simplification on Refreshable Displays. Assist. Technol. 2013, 25, 31–38.
13. Crossan, A.; Brewster, S. Two-Handed Navigation in a Haptic Virtual Environment. In Proceedings of the Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 22–27 April 2006; pp. 676–681.
14. Noble, N.; Martin, B. Shape Discovering Using Tactile Guidance. In Proceedings of the 6th International Conference EuroHaptics, Paris, France, 3–6 July 2006.
15. Paneels, S.A.; Roberts, J.C.; Rodgers, P.J. Haptic Interaction Techniques for Exploring Chart Data. In Proceedings of the International Conference on Haptic and Audio Interaction Design, Dresden, Germany, 10–11 September 2009; pp. 31–40.
16. Lazea, A.; Pawluk, D.T.V. Design and Testing of a Haptic-Feedback Active Mouse for Accessing Virtual Tactile Diagrams. In Proceedings of the RESNA, Arlington, VA, USA, 10–14 July 2016.
17. Abbott, J.J.; Marayong, P.; Okamura, A.M. Haptic Virtual Fixtures for Robot-Assisted Manipulation. In Proceedings of the 12th International Symposium ISRR, San Francisco, CA, USA, 12–15 October 2005; Robotics Research. Springer: Berlin/Heidelberg, Germany, 2007; pp. 49–64.
18. Tavakoli, M.; Lopes, P.; Sgrigna, L.; Viegas, C. Motion control of an omnidirectional climbing robot based on dead reckoning method. Mechatronics 2015, 30, 94–106.
19. Rastogi, R.; Pawluk, D.T.; Ketchum, J. Issues of Using Tactile Mice by Individuals Who Are Blind and Visually Impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 311–318.
20. Headley, P.C.; Pawluk, D.T.V. A Low-Cost, Variable-Amplitude Haptic Distributed Display for Persons Who Are Blind and Visually Impaired. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS’10, Orlando, FL, USA, 25–27 October 2010; ACM: New York, NY, USA, 2010; pp. 227–228.
21. Bonarini, A.; Matteucci, M.; Restelli, M. Dead Reckoning for Mobile Robots using Two Optical Mice. In Proceedings of the First International Conference on Informatics in Control, Automation and Robotics, Setúbal, Portugal, 25–28 August 2004; pp. 87–94.
22. Liu, Y.; Ou, Y.S.; Han, W.C. Mobile Robot Localization Based on Optical Sensor. In Proceedings of the 2019 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2019, Irkutsk, Russia, 4–9 August 2019; pp. 874–879.
23. Edman, P.K. Tactile Graphics; American Foundation for the Blind: Arlington, VA, USA, 1992.
24. Palani, H.P.; Giudice, N.A. Principles for Designing Large-Format Refreshable Haptic Graphics Using Touchscreen Devices: An Evaluation of Nonvisual Panning Methods. ACM Trans. Access. Comput. 2017, 9, 1–25.
25. Burch, D. Development of a Multiple Contact Haptic Display with Texture-Enhanced Graphics. Ph.D. Thesis, Virginia Commonwealth University, Richmond, VA, USA, 2012.
26. Pawluk, D.T.V.; Van Buskirk, C.P.; Killebrew, J.H.; Hsiao, S.S.; Johnson, K.O. Control and Pattern Specification for a High Density Tactile Array. In Proceedings of the ASME Dynamic Systems and Control Division, Anaheim, CA, USA, 15–20 November 1998; Volume 64, pp. 97–102.
27. Tzafestas, S.G. Introduction to Mobile Robot Control; Elsevier: Amsterdam, The Netherlands, 2013.
28. Lederman, S.J.; Klatzky, R.L. Hand movements: A window into haptic object recognition. Cogn. Psychol. 1987, 19, 342–368.
29. Marini, F.; Ferrantino, M.; Zenzeri, J. Proprioceptive identification of joint position versus kinaesthetic movement reproduction. Hum. Mov. Sci. 2018, 62, 1–13.
30. Hasty, L. Teaching Tactile Graphics. Available online: https://www.tactilegraphics.org/teachingtgs.html (accessed on 12 April 2022).
Figure 1. Block diagram illustrating the interaction between functional blocks of the system.
Figure 2. Current prototype of the system.
Figure 3. Diagram representing the robot block.
Figure 4. Assembly of the optical mouse sensor and inertial measurement unit mounted underneath the robot base.
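For readers curious how the two sensors shown in Figure 4 might be combined, the sketch below gives a minimal planar dead-reckoning update in Python. It assumes, purely for illustration, that the optical mouse sensor reports incremental displacements in its own frame and that the inertial measurement unit supplies an absolute yaw estimate; the class and variable names are hypothetical, and the actual fusion scheme used in the Cobot is described in the main text.

```python
import math

class DeadReckoning:
    """Minimal planar dead-reckoning sketch: optical-mouse displacements are
    rotated into the world frame using the yaw reported by the IMU."""

    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x = x          # world-frame position
        self.y = y
        self.theta = theta  # heading (rad), taken from the IMU

    def update(self, dx_mouse, dy_mouse, imu_yaw):
        """dx_mouse, dy_mouse: incremental displacement in the sensor frame.
        imu_yaw: absolute heading from the IMU (rad)."""
        self.theta = imu_yaw
        c, s = math.cos(self.theta), math.sin(self.theta)
        # Rotate the body-frame displacement into the world frame and integrate.
        self.x += c * dx_mouse - s * dy_mouse
        self.y += s * dx_mouse + c * dy_mouse
        return self.x, self.y, self.theta
```

In practice, the raw mouse counts would first be converted from dots to physical units, and the IMU yaw would typically be filtered before being used in such an update.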
Figure 5. (a): Sensor cap block consisting of the top shell, force-sensing system, and tactile display; (b): MicroJoystick (left) and connection between the robot base, MicroJoystick, and top shell (right).
Figure 6. (a): Illustration of a sample virtual diagram (circle) and the calculation of the final force depending on the location of the mobile robot and the applied force; here, both the admittance value (k_τ) and the stiffness factor (k_s) are set to 0.5. (b): Illustration of a virtual object (circle) and the path followed by the mobile robot in response to a user's input force; here, the stiffness factor (k_s) is set to 0.5, and the admittance value (k_τ) is set to 0.25 at all locations (i.e., to promote movement along the desired direction) except those marked with the dotted square (free exploration, k_τ = 1). (c): Illustration of an equilateral triangle and the path followed by the mobile robot in response to a user's input force; pressing the homing button brings the user back to the nearest point on the diagram. Blue circles show the position of the robot with respect to the diagram.
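As a rough illustration of the force shaping described in the Figure 6 caption, the sketch below implements one common haptic virtual-fixture formulation: the user's force is decomposed into a component along the locally preferred (path-tangent) direction and an off-path component, the off-path component is attenuated by the admittance value k_τ, and a spring-like term scaled by the stiffness factor k_s pulls the device toward the nearest point on the diagram. The function name and the exact decomposition are assumptions made for this example; the control law actually used by the Cobot is given in the main text and may differ.

```python
import numpy as np

def fixture_force(f_user, p_robot, p_nearest, t_hat, k_tau=0.5, k_s=0.5):
    """Illustrative virtual-fixture force shaping (not the authors' exact law).

    f_user    : 2D force applied by the user
    p_robot   : current robot position
    p_nearest : nearest point on the virtual diagram
    t_hat     : unit tangent of the diagram at p_nearest
    k_tau     : admittance value in [0, 1]; 1 corresponds to free exploration
    k_s       : stiffness factor pulling the robot back onto the diagram
    """
    f_user = np.asarray(f_user, dtype=float)
    t_hat = np.asarray(t_hat, dtype=float)

    # Decompose the user's force into on-path and off-path components.
    f_along = np.dot(f_user, t_hat) * t_hat
    f_across = f_user - f_along

    # Attenuate motion away from the path; keep motion along it.
    f_guided = f_along + k_tau * f_across

    # Spring-like pull toward the nearest point on the diagram.
    f_restore = k_s * (np.asarray(p_nearest, dtype=float) -
                       np.asarray(p_robot, dtype=float))

    return f_guided + f_restore

# Example: user pushes diagonally while near a horizontal segment of the diagram.
f_cmd = fixture_force(f_user=[1.0, 1.0], p_robot=[5.0, 0.5],
                      p_nearest=[5.0, 0.0], t_hat=[1.0, 0.0])
```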
Figure 7. Two-dimensional geometric shapes. (a): Illustration of a circle (diameter = 10.16 cm). (b): Illustration of a square (length = 10.16 cm, width = 10.16 cm). (c): Illustration of a rectangle (length = 10.16 cm, width = 5.08 cm). (d): Illustration of an equilateral triangle (side = 10.16 cm). (e): Illustration of a star (height = 10.16 cm).
Figure 8. Robot movement along a simple trajectory (solid black line indicates the ideal trajectory; dashed red, blue, and green lines indicate trajectories followed by the robot in three different trials).
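One simple way to quantify the deviations between the ideal and measured trajectories in Figure 8 is a root-mean-square point-to-curve error. The sketch below is an illustrative metric only, assuming each trajectory is available as a sequence of 2D samples; it is not necessarily the measure reported by the authors.

```python
import numpy as np

def rms_tracking_error(trial_xy, ideal_xy):
    """RMS distance from each sampled trial point to the closest point of a
    densely sampled ideal trajectory. Illustrative metric only."""
    trial = np.asarray(trial_xy, dtype=float)   # shape (N, 2)
    ideal = np.asarray(ideal_xy, dtype=float)   # shape (M, 2)
    # Pairwise distances between trial samples and ideal samples.
    d = np.linalg.norm(trial[:, None, :] - ideal[None, :, :], axis=2)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))
```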
Table 1. List of questions related to usability of the Cobot.
Question Number | Question
1 | The device was maneuverable
2 | The device was too big to use effectively
3 | The control knob was useful for controlling different admittance value settings k_τ (i.e., the relative strength with which the user and the Cobot share control)
4 | Most people would learn to use this system easily
5 | The homing button was useful to move the device to the nearest point on the diagram
6 | The Cobot was easy to use
7 | Applying force to the mouse cap was useful to control the Cobot
Table 2. Rating-scale responses to the questions listed in Table 1.
Question Number | P1 | P2 | P3 | P4
1 | 3 | 5 | 5 | 4
2 | 2 | 1 | 2 | 2
3 | 5 | 5 | 5 | 5
4 | 4 | 5 | 5 | 5
5 | 3 | 5 | 5 | 5
6 | 4 | 4 | 5 | 5
7 | 4 | 5 | 4 | 5
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
