Article

What the Mind Can Comprehend from a Single Touch

1 TAUCHI Research Center, Tampere University, 33100 Tampere, Finland
2 Department of Information Design and Corporate Communication, Bentley University, Waltham, MA 02452, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Multimodal Technol. Interact. 2024, 8(6), 45; https://doi.org/10.3390/mti8060045
Submission received: 15 April 2024 / Revised: 17 May 2024 / Accepted: 20 May 2024 / Published: 28 May 2024

Abstract

This paper investigates the versatility of force feedback (FF) technology in enhancing user interfaces across a spectrum of applications. We delve into the human finger pad’s sensitivity to FF stimuli, which is critical to the development of intuitive and responsive controls in sectors such as medicine, where precision is paramount, and entertainment, where immersive experiences are sought. The study presents a case study in the automotive domain, in which FF technology was implemented to simulate mechanical button presses. Reference forces between 0.04 N and 0.054 N corresponded to JND levels of 0.254 and 0.298 when using a linear force feedback scale, and forces between 0.028 N and 0.033 N corresponded to JND levels of 0.074 and 0.164 when using a logarithmic force scale. The results demonstrate the technology’s efficacy and potential for widespread adoption in various industries, underscoring its significance in the evolution of haptic feedback systems.

1. Introduction

In the fundamental paper “The Intelligent Hand” presented by Klatzky and Lederman [1], the authors begin their presentation of a general theory of haptic apprehension with the following two statements: “Haptics is very poor at apprehending spatial-layout information in a two-dimensional plane”, and “Haptics is very good at learning about and recognizing three-dimensional objects”. We seek to answer our primary research question: Do these statements continue to be valid in regard to what is experienced through a single touch?
Let us imagine that the finger pad is a small window onto the fascinating world of sensations. Before a human can comprehend the endless variety of sensations, they must learn to explore and properly interpret the cues available at a single point of touch by linking this afferent flow to haptic imagination. When a person’s haptic imagination is sufficiently mature, it is no longer necessary to stimulate the finger pad to induce the sensation, as it can be evoked directly in the person’s imagination.
It is well known that human finger pads have thousands of receptors (mechanoreceptors, nociceptors and thermoreceptors). However, there are no special organs (cells or formations) in the human skin specifically sensitive to vibration, acceleration, gravity or other physical parameters, including electrical current or magnetic fields, as one might erroneously believe. Most mechanoreceptors located inside the dermis can perceive only mechanical energy as pressure force and micro-displacements translated by surrounding dermal tissues to the nerve endings and special cells/corpuscles (Meissner, Merkel, Pacinian, Ruffini and Krause end bulb) aggregated into heterogeneous receptive fields (Figure 1) with multiple highly sensitive zones distributed within an area typically covering five to ten fingerprint ridges [2].
To date, we have not found related research on the comparative morphology or evolution of receptive fields on the finger pads of primates vs. humans. Since receptive fields are morphological formations [3], we can suppose that they emerged and evolved through the optimization of afferent flow processing during continuous exploration of external objects. The exploration of objects relies on a limited number of available parsing signals or physical parameters, namely, the force applied during exploration. Exploration of the contact surface may happen when a sensitive surface of the finger pad comes into direct contact with the external object by pushing against the contact location, or when the contact surface of the object moves with respect to the finger pad. In these cases, the finger pad produces an initial force against the contact surface in the direction of the surface, thus generating a repulsive force according to the physical properties of the surface or object (Figure 2). Exploration movements, or relative displacements of the finger pad in the vicinity of the contact location, can occur in at least two orthogonal directions (lateral and longitudinal), both tangential with respect to the applied normal force.
Let us consider in more detail the forces acting at the contact location. We could refer to the structural biomechanics presented by Wu and co-authors [4] and Gerling and Thomas [5], or the anatomical details disclosed by Bolanowski and Pawson [6], but what is more important in the considerations that follow is the dynamic ratio of the force components acting within the finger pad and in the vicinity of the contact location (Figure 3).
The finger pad, or fingertip, acts against a contact surface featuring embedded pressure sensors that detect the parameters of the physical impact at the place of contact. Tools such as Doppler scanners can detect surface deformation and micro-displacements at the place of contact with micrometer resolution and accuracy, even in the absence of direct contact [7].
Nevertheless, the elastic fingertip experiences strain as its tissues are squeezed against the bone of the distal phalanx and the nail [8]. And, as the authors rightly pointed out, “Encoding fingertip forces in afferents that terminate dorsally in the fingertip might be advantageous because fine-form features of the contacted surfaces would influence the afferent signals less than with afferents terminating in the finger pulp”. Thus, the resultant force at the contact surface is the vector difference between the force applied by the user during exploration and the repulsive force exerted by the contact surface together with the elastic response of the finger pad. The array of resultant forces distributed within the vicinity of the finger pad contact location, or fingertip contact point, changes dynamically, generating a complex ensemble of afferents [9] from the nearest dermal receptive fields and tactile afferents, conveying information about contact forces in sync with the kinesthetic flow in accordance with exploration behavior [8].
Force discrimination ability is an important perceptual sense that can maintain high-precision dexterous manipulations due to the force feedback (FF) loop directly at the level of skin mechanoreceptors [2], as well as due to the proprioceptors embedded in the muscles, tendons and ligaments. FF is important for many types of surgical manipulations, especially in microsurgery when performing delicate manipulations with thin living structures, for instance, as in assisted reproductive technology. Forces applied in in vivo experiments with soft tissue and minimally invasive surgeries can vary between 0 and 2.5 N [10,11], including, in some neurosurgical tasks, forces smaller than 0.3 N [12].
Psychophysics quantitatively establishes the relationship between the physical signals inducing sensations and the cognitive processes shaping perception, i.e., between the parameters related to physical world exploration and psychological concepts. In psychophysics, the just-noticeable difference (JND) is a quantitative measure defined as the smallest relative change (ΔΦ) in stimulus intensity (Φ₀) that induces a change in stimulus perception [13]. The JND usually has no unit of measurement (JND = ΔΦ/Φ₀). According to Weber’s law, at least at forces greater than 2 N, the human ability to perceive changes in a stimulus is proportional to the intensity of the stimulus, and the ratio is a constant known as the Weber fraction, according to Gescheider [14]. However, the detection of stimuli at very low intensities does not adhere to Weber’s law [9]. Close to the absolute threshold, the JND is significantly larger and exhibits an inverse relationship with the intensity of the stimulus.
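The inverse relationship near the absolute threshold can be illustrated with a minimal sketch of the JND model discussed above. The constants a and b here are placeholders for illustration only, not fitted values from the cited studies.

```python
# Illustrative near-threshold JND model: JND(phi0) = a / (b + phi0).
# a and b are hypothetical constants; at low reference forces the JND
# is large and decreases as stimulus intensity grows, consistent with
# the inverse relationship described in the text.

def jnd(phi0: float, a: float = 0.05, b: float = 0.1) -> float:
    """Relative difference threshold for a reference force phi0 (N)."""
    return a / (b + phi0)

def delta_phi(phi0: float, a: float = 0.05, b: float = 0.1) -> float:
    """Absolute difference threshold: delta_phi = JND * phi0."""
    return jnd(phi0, a, b) * phi0
```

For example, with these placeholder constants, a 0.1 N reference force yields a much larger relative JND than a 1 N reference force, matching the behavior reported near the absolute threshold.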
A group of researchers from the University of Toronto [15] examined the human hand’s force perception abilities in subjects having different hand sensitivity by virtue of their vocational training, i.e., surgeons and non-surgeons, near the absolute perceptual threshold for forces that do not follow Weber’s law (0.1 N, 0.3 N, 0.6 N and 0.8 N).
The equation presented below suggests that the difference threshold for low-intensity forces does not maintain a steady proportion of the reference force. Instead, this proportion is inherently dependent on the reference stimulus itself. Consequently, employing the model of JND as a basis for developing the force scaling function is a logical approach.
As JND = ΔΦ/Φ₀, it follows that ΔΦ = {a₃/(b₃ + Φ₀)}·Φ₀, or ΔΦ = f(Φ₀)·Φ₀
A similar approach was also recently pursued by Botturi et al. [16]. The proposed force feedback scaling function is
f_feedback = f_s·Φ(f_s) = f_s(1 + JND)
f_feedback = f_s(1 + a/(b + f_s))
As explained by Botturi, f_s is the force at the slave–environment interface, and f_feedback is the force fed back to the master device (f_m). The function Φ(f_s) indicates that the scaling factor is a function of the sensed force at the slave interface.
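The scaling function above can be sketched directly in code. The constants a and b are placeholders, not the values reported by Botturi et al. [16].

```python
# Sketch of the force feedback scaling function quoted above:
# f_feedback = f_s * (1 + a / (b + f_s)).
# a and b are hypothetical constants for illustration.

def scale_feedback(f_s: float, a: float = 0.05, b: float = 0.1) -> float:
    """Scale the sensed slave-side force f_s (N) for the master device."""
    return f_s * (1.0 + a / (b + f_s))
```

Note the design consequence: the amplification factor (1 + a/(b + f_s)) is largest for small sensed forces and tends toward 1 as f_s grows, so weak forces near the perceptual threshold are boosted the most.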

2. Research on Force Feedback Applications at JND Level

As demonstrated in [17] (Evreinov, 2005), the two-handed manipulation of analog buttons (by pressing them with the thumbs) can provide accurate target (12 × 12 pixels, or 4 × 4 mm) acquisition within the working field of the screen (768 × 768 pixels, 252 × 252 mm) when bimanual cursor pointing (on the X- and Y-axes, respectively) does not exceed a range of ±2 mm of physical button displacements. This implies that FF as low as 0.15 N of the JND step in a range between 0.1 N and 1.5 N can provide only 10% input accuracy based on FF and button displacements of ±0.2 mm. When combined with visual feedback, the system can achieve a resolution accuracy within ±5 pixels, or approximately ±1.3%. This means that by using additional feedback methods, the precision of bimanual cursor pointing can be improved to around 3.9%, which corresponds to a force feedback intensity of 60 mN.
By exploring JND force perception during the pressing of a button with reference forces in the range 0.5–2.5 N, Doerrer and Werthschützky (2002) [18] revealed that, on average, participants were able to perceive a sudden change in FF larger than 100 mN. In relation to the reference forces between 0.5 N and 2.5 N, the just-noticeable relative force differences decreased from 20% to 5%. Within the range of 1.5 N–2.5 N, the determined mean JNDs were 7.5% and 5.5%. These results are consistent with the values of 5–10% stated earlier by Tan et al. (1992) [19].
The study of the psychophysical perception of distributed pressure forces upon exploration by fingertips has long been of interest [7,19,20,21,22,23]. Rørvik and co-authors [24] investigated whether untrained people could locate and determine by palpation the shapes and hardness of irregularities rendered between two compliant layers by using the ferrogranular jamming principle.
Evreinov and Raisamo (2005) [25] investigated how untrained subjects were able to memorize and reproduce, a week later, a sequence of four dynamic patterns of distributed pressure profiles (Figure 1). The pressure levels varied within compression and dilatation phases from 10 N to 0.15 N, taking in total about 300 ms for each of the four behavioral patterns of distributed pressure profiles. The study showed that untrained subjects were able to memorize the four dynamic patterns of self-sensed profiles of the fingertip and reproduce them a week later with high accuracy.
In their PopTouch thin-film array of dynamically reconfigurable physical buttons, Firouzeh [26] used hydraulically amplified self-healing electrostatic (HASEL) stretchable caps. Each button produced a 1.5 mm out-of-plane displacement and supported a holding force of up to 1.5 N before sudden snap-through, providing an instinctive “click” sensation akin to that of a pushbutton.
Recently, Shultz and Harrison [27] introduced a haptic display that utilizes a series of electro-osmotic micropumps, each dedicated to a single button. This device is capable of producing an out-of-plane displacement of up to 6 mm, and it can exert a force exceeding 1 N for a button cap with a diameter of 10 mm. However, the device’s stiffness, its 5 mm thickness and substantial energy demand (1–2 W/cm²) restrict its practical applications. Additionally, the device’s lack of transparency hinders its incorporation into touchscreen interfaces.
Using multi-layered dielectric elastomer (MLDE), Lee and colleagues [28] developed a 20 mm diameter, 1.5 mm thick haptic actuator that can generate 2 mm out-of-plane displacement at 250 mN holding force. While being effective in providing skin stimulation at direct contact, a higher holding force (around 1 N) is required to simulate actions over various physical buttons.
Rekimoto and co-authors (2003) [29] proposed to complement the pressure-based input with a capacitive sensor at the contact location. However, as stated earlier by Hinckley and Sinclair (1999) [30] regarding capacitive sensors that require zero activation force to trigger the contact, “they may be prone to accidental activation due to inadvertent contact”. Therefore, to differentiate inadvertent pressure changes from JND signals that must support precise dexterous control in some applications, other authors have studied the specific conditions of use in mobile devices and interaction techniques, when pressure variation can impact the interpretation of the perceived force signals.
For instance, Stewart et al. [31] shared findings from a preliminary investigation into accidental changes in grip pressure on mobile devices, observed under both stationary laboratory conditions and while walking, with significant pressure fluctuations noted in each scenario. The study utilized the FSR-402 sensor in experimental setups, observing pressure variations from 0.1 N to 3–10 N, a range supported by the sensor and deemed ergonomically suitable for fingertip pressure examinations. However, to make use of pressure inputs effectively, they suggested filtering out accidental variations by integrating pressure measurements with accelerometer data. Given that unintentional pressure changes typically fell between 0 and 0.6 N, setting thresholds beyond 0.6 N was recommended for reliable detection of deliberate pressure inputs.
Though Rekimoto and Schwesig (2006) [32] demonstrated only a 3-level pressure-based button (“not pressed”, “light-pressed” and “hard-pressed”), providing tactile feedback upon crossing these levels, the results of various studies of the pressure sense as an auxiliary input modality [17,31,33,34,35,36] have shown that users can distinguish and apply up to 10 pressure levels with high degrees of accuracy when navigating through different types of menus with visual and audio feedback.
Stewart and colleagues [35] conducted a series of experiments to understand fundamental aspects of pressure-based interaction. In particular, they tried to assess single-sided input [25] against grasping pressure delivered through a two-sided interaction paradigm, studying the controllability of pressure by the fingertip at different levels over a greater period of time (e.g., for five seconds rather than a single pressure pattern). In Evreinov (2005) [17], the dwell time for target acquisition was an uninterrupted period of only 300 ms. The results [35,37] suggested that using a grasping motion is more effective than one-sided input and rivals the efficiency of pressure input applied to solid surfaces.
Liao and colleagues developed the Button Simulator [38], a device that can simulate the physical button action by employing any given force–displacement curve. The Button Simulator had a low average error offset of around 0.034 N, meaning that the simulated force was very close to the actual force expected. However, the simulated sensation was not perceived as realistic when the button was pressed too fast. Additional research demonstrated that the deformation of the skin caused by the tangential force on the fingertip can be expressed by using a spring–mass–damper model, indicating a linear relationship between deformation and force [39]. Research by Kaaresoja at Nokia focused on the implementation of vibrotactile feedback accompanied by visual and audio feedback to find latency thresholds required to make virtual button presses feel natural [40].
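The spring–mass–damper relationship mentioned above can be sketched as a tiny simulation. The parameter values are illustrative placeholders, not measured fingertip properties from [39].

```python
# Minimal spring-mass-damper sketch of fingertip skin deformation under a
# constant tangential force: m*x'' + c*x' + k*x = F.
# m, c, k are hypothetical parameters chosen only for illustration.

def simulate_deformation(force_n: float, m: float = 0.005, c: float = 0.5,
                         k: float = 100.0, dt: float = 1e-4,
                         steps: int = 50000) -> float:
    """Integrate with semi-implicit Euler; return the final displacement (m)."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (force_n - c * v - k * x) / m  # Newton's second law
        v += a * dt
        x += v * dt
    return x
```

At steady state the damper and inertia terms vanish, leaving x = F/k, i.e., the linear deformation-force relationship the text refers to.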
Recent studies have reinforced the value of force feedback systems across various applications, particularly in enhancing user experience and performance in high-precision tasks. For instance, the integration of force feedback in virtual reality for medical planning has shown promise in simulating surgical procedures like clamping in colorectal surgery, highlighting the system’s potential in medical training [41]. Moreover, a meta-analysis on robot-assisted surgery quantified the benefits of haptic feedback, revealing significant improvements in surgical accuracy and a reduction in the force applied by surgeons [42]. Another innovative approach involves a sensorless feedback system that aids surgeons in distinguishing among different tissue types during laparoscopic surgery, with a high success rate in tumor identification [43]. These studies show the effectiveness of force feedback technologies in providing realistic tactile sensations that can improve outcomes in medical settings.
These diverse approaches, ranging from tactile feedback mechanisms to advanced sensory stimulation technologies, collectively advance the field of virtual button design, offering innovative solutions that enhance user interaction in multimedia applications, especially in contexts where visual attention is limited, like driving. We hypothesized that the repulsive FF at the JND level can be used for simulating the sense of pushbutton bank switches.

3. Application of Repulsive Force Feedback at JND Level for Simulating Sense of Pushbutton Bank Switches—Case Study

We studied the use of repulsive force feedback (FF) at the JND level when interacting with the touch-sensitive surface of a slider used for the TactoTek in-vehicle control panel. Participants were asked to explore the slider and locate an assigned level of FF (one of seven force levels), which they felt through light touch and which had been presented to the fingertip at the starting position, among adjacent segments separated by tactile slits; the remaining segments activated repulsive forces assigned randomly as six pressure distractors selected in a range of 20 mN–170 mN. The participants were able to accurately identify force levels between 40 mN and 54 mN for the linear scale and between 28 mN and 33 mN for the logarithmic scale of repulsive forces. Independently of the repulsive force scale, values below this range were overestimated, while values above this range were underestimated. The task is crucial to understanding the tactile capability to discern force feedback upon exploration of touch-sensitive surfaces when the level of repulsive forces is less than 200 mN in the absence of other accompanying afferent information (visual and/or auditory). The results of studies on repulsive force modulation as an active response of the contact surface can be used to modify human perception upon exploration of or interaction with a physical surface to create the illusion of surface irregularities, a specific relief or dynamic parameters (such as friction, slipperiness and flexibility) in virtual reality for the development of more realistic 3D models simulating processes, objects and environments.
The number of physical buttons in vehicles has decreased in part as an effort to reduce a driver’s workload [44,45,46]. In parallel with this reduction, many conventional controls have been swapped out in favor of smooth surfaces with capacitive touch interfaces as part of bright, colorful displays [47]. More and more functions are then added to these interfaces, culminating in numerous factors of distraction. This creates a paradoxical situation where technological and ergonomic advances may impair usability by distracting a driver’s attention away from the road. However, previous research has shown that using haptic displays to enhance user interaction with in-vehicle infotainment systems can reduce task completion time and distraction while driving [48].
Reliability and reduced costs are some of the main reasons why car makers have moved away from physical buttons [49,50]. That said, so far, interaction performance with touch displays has not reached the same level as that of similar physical interfaces, nor have they even been able to mimic the physical displacement forces that would have been applied to the finger [18,51,52,53,54].
Touchscreen interfaces require users to select and activate specific functions precisely, which demands continuous visual attention and distracts the driver from the road by increasing eyes-off-the-road time (EORT) [55]. In recent studies, researchers [56] have found that touchscreens represent a significant step back for auto design by worsening a driver’s performance. As these displays became more common in vehicles in the 2010s, it was found that these systems bring with them a significantly increased crash risk [50].
We believe that the touch-sensitive surface is a valuable technology that offers many possibilities for innovation. In our work, we have transformed a capacitive touch control panel with a central touch-sensitive slider into a single actuated surface that simulates the behavior of seven pushbutton switches. This was achieved by implementing a powerful voice-coil actuator beneath the surface that has the ability to provide strong tactile feedback to a user (Figure 4).
The objective of this work is to evaluate the efficiency of a stiff slider surface by simulating the flexibility and the force–displacement curve of seven pushbutton switches that can be pressed and released. The main purpose is to use this simulated force feedback (FF) to render haptically enhanced virtual buttons.

3.1. Participants

Sixteen (nine males and seven females) participants (mean age = 35, SD = 10.02) took part in the test. They were healthy adults who did not report any cognitive or sensory impairment that could have potentially affected their ability to complete the tasks. Before participating in the study, each participant signed a written consent form. The study was conducted in accordance with the guidelines of the Declaration of Helsinki.

3.2. Apparatus and Stimuli

The conceptual illustration of the seven pushbutton switches displayed at the base of the Tactotek control panel is shown in Figure 4. A powerful voice-coil actuator was placed under the center of the control panel, with the panel itself being attached to a pivot joint. Repeated force measurements were made across the panel to ensure that FF was applied equally at all points on the slider. The current design requires a minimal number of components to be affixed to the panel without great modification to the original product and without the need for springs or complicated brackets to provide linear displacement in the direction orthogonal to the fingertip. The force feedback provided was easily tuned via an Arduino microcontroller [57], and the whole setup was calibrated in advance by using an FSR402 pressure sensor [58].
The voice-coil arrangement consisted of an internal diameter of 16 mm and an external diameter of 35 mm, with a height of 20 mm. Six N35 neodymium magnets were stacked and placed in the center of the coil. This stack was attached directly to the underside of the Tactotek panel. A 0.35 mm copper magnet wire with approximately 500 turns was used for the coil, resulting in a measured impedance of 12.5 ohms. A maximum voltage of 16.5 VDC (total current of 1.32 amps, 21.78 watts) was applied to the coil. Force feedback was adjusted as a percentage of the voltage level assigned for each of the seven slider zones, with a range between 0.022 N and 0.17 N, with the scale between FF zones being adjusted based on a linear or logarithmic voltage output. The range of forces was chosen due to their range of sensitivity in relation to the sense of touch [59,60]. Figure 5 displays a chart of the two scales.
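The two seven-zone scales spanning 0.022 N to 0.17 N can be sketched as follows. The exact interpolation used in the experiment is not specified here, so this assumes values evenly spaced in linear and logarithmic space, respectively.

```python
import numpy as np

# Sketch of the two seven-zone force scales described above. The endpoints
# (0.022 N and 0.17 N) come from the text; the evenly spaced interpolation
# in linear vs. log space is an assumption for illustration.

def force_scales(f_min: float = 0.022, f_max: float = 0.17, zones: int = 7):
    linear = np.linspace(f_min, f_max, zones)
    logarithmic = np.logspace(np.log10(f_min), np.log10(f_max), zones)
    return linear, logarithmic
```

On the logarithmic scale the ratio between adjacent zones is constant, so the step size grows with force, loosely mirroring Weber-like perception; on the linear scale the absolute step is constant.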
The flowchart of the usability test software is shown in Figure 6. The collected responses and relevant parameters of signals and stimuli were recorded into log files. When the test is started on a desktop computer, the software application “Slider FF profile” creates an array of indexes of FF signals for each mode of haptic stimulation for the test. This array consists of the force feedback indexes i (Fi values) presented in Table 1. At first, a test force feedback signal is delivered at the location marked with a red circle in Figure 4. Seven force feedback signals, which include the test force feedback and six distractors, are presented randomly on the slider segments. The software application generates the array of the presented force feedback indexes and sends them to the Arduino microcontroller. The Arduino microcontroller assigns the FF values to the seven segments on the slider, based on the selected stimulation mode (linear or logarithmic). Fingertip location is monitored by an attached Neonode IR linear sensor [61] with a touch accuracy of 0.1 mm and a scanning frequency of 500 Hz.
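The stimulus assignment step described above, shuffling one test FF index and six distractor indexes across the seven slider segments before each trial, can be sketched as follows; the function and index values are illustrative, not the actual test software.

```python
import random

# Sketch of per-trial stimulus assignment: the test FF index and the six
# distractor indexes are shuffled so the test sample lands on a random
# slider segment. A seeded Random instance can be passed for repeatability.

def assign_segments(test_index, distractor_indexes, rng=None):
    """Return the seven FF indexes in randomized segment order."""
    rng = rng or random.Random()
    order = [test_index] + list(distractor_indexes)
    rng.shuffle(order)
    return order
```

The desktop application would send the resulting order to the Arduino, which then maps each index to its voltage level for the selected (linear or logarithmic) mode.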

3.3. Procedure

Participants were seated in a comfortable position with their dominant hand on the prototype and non-dominant hand close to a keyboard (Figure 4). The experiment lasted approximately 30–40 min and consisted of three sessions: one training session and two experimental sessions (logarithmic and linear presented randomly). The training session consisted of 20 trials. Each of the experimental sessions comprised 70 trials in which each FF value was presented ten times.
For each trial, a test sample was placed at a random segment position, with all other segments presenting distractors (see Table 1 for values). During the trial, the participant was asked to place their index fingertip at the starting location marked with tape (red circle in Figure 4). After they pressed a key on the keyboard with their non-dominant hand, the test FF sample was presented for 20 ms. The choice of 20 ms was based on pulse decay time [62] and on avoiding possible interference with successive pulses felt at the fingertip while scanning the slider [63,64,65]. Once the exposure time had passed, they were asked to explore the slider segments. The test sample and six distractors were presented in random segment positions. The participant’s task was to identify which of the seven segments contained the FF matching that of the test sample.
Participants could freely explore the slider in any direction with no time limits, as long as the start key was held down. Each time the person’s fingertip slid onto any segment, the indexed FF signal was presented again for 20 ms. When the participant believed that the presented FF matched the test sample, they released the start key to lock in their response and complete the trial.
Immediately after completing the tests (all 70 trials), the participants were given a NASA Task Load Index (TLX) questionnaire to fill out. NASA-TLX measures six key dimensions: mental demand, physical demand, temporal demand, performance, effort and frustration [45]. Each dimension is rated on a 100-point scale, with the combined scores providing an overall workload index. By utilizing NASA-TLX in our study, we aimed to capture a comprehensive understanding of the participants’ cognitive and physical experiences during the study, offering additional insights into the overall user experience and interaction with the system.

3.4. Results

This study gathered a range of data to assess the participants’ interaction with force feedback (FF) levels. All participants demonstrated a clear understanding of the task, successfully identifying the designated level of FF within a randomly assigned location on the slider, amidst varying adjacent FF levels, without needing interruptions or additional guidance. The task was designed to measure the participants’ ability to accurately locate specific FF levels on the slider. This task was crucial to understanding the tactile discernment capabilities of participants in differentiating among FF intensities. The use of various FF levels allowed for a comprehensive evaluation of the sensory response across a spectrum of force feedback stimuli.

3.4.1. Accuracy of FF Level Identification

To effectively analyze participants’ performance in locating the exact FF level on the slider segments, we employed a ratio-based approach. The ratio values for each FF level under both linear and logarithmic stimulation conditions were calculated. A ratio of 1 indicated precise identification of the presented FF level, whereas ratios below 1 signified underestimation and those above 1 overestimation of the FF level. As depicted in Figure 7, the mean accuracy and confidence intervals (CIs) at 95% for each FF level are presented, allowing for a clearer depiction of the participants’ performance. The use of this ratio metric was instrumental in quantifying the degree of accuracy in the participants’ responses.
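The ratio metric and its 95% confidence interval can be sketched as follows; the function name and the synthetic arrays are illustrative, not the study's analysis code.

```python
import numpy as np
from scipy import stats

# Sketch of the ratio-based accuracy metric described above: each response
# is expressed as (selected FF level) / (presented FF level). A ratio of 1
# means precise identification, <1 underestimation, >1 overestimation.
# The 95% CI uses the t-distribution on the standard error of the mean.

def ratio_summary(selected, presented):
    ratios = np.asarray(selected) / np.asarray(presented)
    mean = ratios.mean()
    half_width = stats.sem(ratios) * stats.t.ppf(0.975, len(ratios) - 1)
    return mean, (mean - half_width, mean + half_width)
```

A CI that excludes 1 on the low side would indicate systematic underestimation of that FF level, and on the high side, overestimation.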
A repeated measures ANOVA, with stimulation mode and FF level as factors, was conducted on the accuracy of responses. The results reveal significant effects for both the main factors voltage [F(1, 15) = 5.35, p < 0.05, ηp² = 0.26] and level [F(6, 90) = 84.95, p < 0.001, ηp² = 0.85]. All levels displayed significant differences in the participants’ responses, with the exception of levels 3 and 4 and levels 6 and 7. This non-significance in certain levels might indicate a perceptual threshold among participants, suggesting an area for further investigation. Raincloud plots, as illustrated in Figure 8, provide a visual representation of the individual accuracy of responses and their distribution. In conjunction with Table 2, these plots elucidate that the logarithmic voltage condition resulted in a higher tendency for participants to overestimate the presented FF, especially at levels 1, 2 and 5. This overestimation under logarithmic conditions could imply a perceptual bias or a cognitive processing difference when encountering logarithmic scales.
One-sample t-tests provided further insights, indicating that performances at levels 3 and 4 with a linear scale and level 4 with a logarithmic scale were notably accurate. Conversely, for levels 1 and 2, in both types of stimulation modes, the ratios were significantly higher than 1, indicating a consistent overestimation of the voltage presented. This could suggest that lower FF levels are more challenging to perceive accurately, leading to a natural tendency to overestimate. On the other hand, levels 5, 6 and 7 were significantly lower than 1, denoting an underestimation of the presented FF level. This underestimation at higher levels might reflect sensory saturation or a limitation in distinguishing finer gradations of force at higher intensities.

3.4.2. Time Efficiency

Regarding the time taken to respond, there was no significant difference in the main factors of FF levels and voltage conditions. The participants generally spent a similar amount of time (between 6.9 and 8.4 s) across all FF levels, suggesting a consistent cognitive load required for discerning different FF levels, irrespective of FF intensity or complexity. Figure 9 visually represents the distribution of response times for each FF level and stimulation mode. The error bars indicate 95% confidence intervals, highlighting the variability in response times among participants. The similar response times across all FF levels reinforce the notion that the participants’ exploration and identification of FF levels was consistent across all conditions.

3.4.3. Subjective Workload Assessment

The analysis of the NASA-TLX data revealed insightful trends about the participants’ perceived workload during the task (Table 3). Scores below 10 on NASA-TLX are considered low, and scores between 10 and 29 indicate a medium workload. The average scores for each dimension suggest a generally moderate level of perceived workload. Mental demand was slightly above average at 12.31; physical demand was low, with an average of 8.94; and temporal demand was also low, at 7.44, indicating minimal time pressure felt by participants. Performance confidence was modest (average of 10.1), the task was perceived as mildly difficult (average of 11.8), and the participants generally did not feel frustrated (average of 7.8).
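The confidence bounds in Table 3 can be reproduced from the means and standard deviations with the standard t-based interval, mean ± t·SD/√n, again assuming n = 16 (t ≈ 2.131 for df = 15). A sketch for the mental demand dimension:

```python
import math

def ci95(mean, sd, n, t_crit=2.131):
    """95% confidence interval for a mean, using the t distribution (df = n - 1)."""
    half_width = t_crit * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# Mental demand: mean 12.313, SD 4.963 (Table 3), n = 16
lower, upper = ci95(12.313, 4.963, 16)
print(round(lower, 3), round(upper, 3))  # 9.669 14.957 (Table 3: 9.668 and 14.957, from unrounded inputs)
```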
A Principal Component Analysis (PCA) on the NASA-TLX items, using oblique rotation with an eigenvalue above 1, further clarified the underlying structure of the workload dimensions. The PCA was supported by a significant Bartlett’s test of sphericity (p = 0.009) and a moderate Kaiser–Meyer–Olkin (KMO) measure of 0.63. As shown in Table 4, two primary components were extracted: The first (RC1) strongly correlated with physical demand (0.906), mental demand (0.793) and effort (0.784), suggesting an “Overall Task Load” factor that combines physical and mental demands with general effort. Temporal demand was also moderately loaded on this component. The second component (RC2) contrasted performance (negatively loaded) with frustration (positively loaded), reflecting an “Emotional Strain vs. Task Efficacy” factor, where increased frustration aligns with lower perceived performance.
Temporal demand’s presence in both components, along with its unique variance (0.287), indicates a more complex role, suggesting that time both influences and is influenced by the Overall Task Load and the emotional/performance dynamics. For example, high temporal demand might increase the overall workload (physical, mental and effort) while simultaneously affecting frustration levels and perceived performance. The unique variance of temporal demand could reflect urgency, pacing or time-related stress, aspects specific to the temporal demands of any task.
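In the PCA output, uniqueness is the share of an item’s variance left unexplained by the two retained components; its complement, the communality, is what they do explain. A sketch ranking the NASA-TLX items by communality, using the uniqueness column of Table 4:

```python
# Uniqueness values from Table 4 (PCA of the NASA-TLX items)
uniqueness = {
    "physical demand": 0.169,
    "mental demand": 0.393,
    "effort": 0.399,
    "temporal demand": 0.287,
    "performance": 0.193,
    "frustration": 0.195,
}

# Communality = 1 - uniqueness: variance captured by the retained components
communality = {item: round(1 - u, 3) for item, u in uniqueness.items()}
best_explained = max(communality, key=communality.get)
print(best_explained, communality[best_explained])  # physical demand 0.831
```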
At the end of the session, the participants were asked to identify devices where they could see this technology being implemented. Many participants cited game pads and phones, with a significant number pointing out its suitability for an in-car panel. This preference was expected, as the panel being tested was clearly intended for in-car usage. Participants explained that the design of the panel was intuitively suited to minimizing visual distraction. When discussing the potential advantages of integrating variable FF values into existing devices, they noted not only an improvement in ease of use but also potential benefits for blind users, as well as tactile feedback that is more agreeable than harsh audio signals. Upon request for further comments, a few participants provided insightful feedback, particularly regarding the ergonomics of the device’s slick surface, which tended to stick to the finger, compared with a smoother, matte finish. The impact of different materials on touch perception has been documented in previous work [7].

4. Discussion and Conclusions

In this paper, we investigated the repulsive force feedback (FF) at the JND level used for simulating the sense of pushbutton bank switches and the performance of participants interacting with a touch-sensitive slider. The participants were asked to locate and select different zones on the slider based on the FF levels they felt. The participants could accurately identify FF levels that were between 0.04 N and 0.054 N at the JND levels of 0.254 and 0.298 for the linear scale and those that were between 0.028 N and 0.033 N at the JND levels of 0.074 and 0.164 for the logarithmic scale. They overestimated values below this range and underestimated values above it. The NASA-TLX scores indicate that the participants experienced a moderate level of workload during the task. While mental demand was slightly above average, physical demand and temporal demand were relatively low, suggesting that the task was more mentally taxing than physically demanding or time-pressured. The modest scores in performance and the mild effort level indicate that participants found the task somewhat challenging but not overly so. The low level of frustration further suggests that the task did not induce significant emotional strain.
Integrating these results, the system demonstrates a high level of practical efficacy. The accuracy in identifying FF levels, despite some perceptual challenges at specific levels, indicates robust sensory response capabilities. The consistent response times across varying FF intensities ensure reliable and predictable interaction, which is crucial to user satisfaction and system reliability. Moreover, the moderate workload reported by the participants suggests that the system is user-friendly and does not impose excessive cognitive or physical stress.
In the automotive industry, FF technology is being used to create more intuitive and safer control systems for drivers. For example, haptic feedback can improve the effectiveness of touch screens and controls, reducing driver distraction and increasing engagement [66]. The current trajectory in automotive design indicates a continued and widespread adoption of capacitive touch controls in consumer vehicles, a trend that is expected to persist well into the future [67]. Despite advancements, the reality of consumer-accessible Level IV autonomous vehicles, which would eliminate the need for an attentive driver, remains distant [68]. This underscores the critical need for ongoing research into making existing touch controls safer and more user-friendly for drivers. Our study contributes to this area by demonstrating the potential of force feedback across the Tactotek touch-sensitive slider’s seven distinct locations to enhance the user interface.
External factors, such as ambient noise, lighting and environmental stressors, can significantly impact the effectiveness of force feedback systems. Our experiment was conducted in a controlled laboratory environment, allowing the participants to concentrate solely on the task and the interface. However, real-world driving involves numerous distractions and environmental factors: traffic, weather conditions, radio noise, passengers, etc. Ambient noise, for instance, can distract users or mask the auditory feedback that often accompanies haptic interfaces, potentially reducing the user’s ability to respond to tactile cues. Inadequate lighting may strain the user’s vision, leading to fatigue and decreased interaction accuracy with the force feedback system. Moreover, environmental stressors, including temperature fluctuations and vibrations, could interfere with the user’s sensory perception, thus affecting performance. To ensure optimal deployment of force feedback systems under real-world conditions, it is essential to consider these external factors. Guidelines could include recommendations for noise-dampening features, adjustable lighting and robust system designs that can withstand environmental variances.
To build upon our findings and address these real-world complexities, we propose several avenues for future research. One key area involves exploring various configurations of the touch-sensitive slider, particularly experimenting with different numbers of zones to identify an optimal setup that maximizes user experience and performance.
It is essential to consider the interplay between user comfort and control accuracy. Tactile experience is greatly influenced by the materials used; soft-touch materials often enhance comfort for prolonged use, while harder materials might be favored for their precise feedback capabilities. Investigating the impact of different overlay materials on the slider could offer insights into how these materials influence the perception and discrimination of FF levels. For instance, Teflon has shown potential in enhancing vibrotactile signals [7], suggesting that material choice could be a critical factor in design considerations.
The study’s exploration into the realm of FF technology could be significantly enriched by delving into additional applications, such as virtual reality (VR) and remote surgery. In VR, FF can transcend visual and auditory immersion by introducing a tangible dimension, allowing users to "feel" the virtual environment, thus enhancing the realism and depth of the experience. Similarly, in the context of remote surgery, FF becomes a pivotal component, providing surgeons with the tactile feedback that is otherwise lost in teleoperated procedures. This haptic information can be crucial to delicate surgical maneuvers, potentially increasing precision and reducing the risk of errors. By expanding the scope of research to include these applications, future work could uncover new insights into the capabilities and limitations of FF technology.
These directions aim not only to refine touch interface technology for current vehicles but also to prepare for the evolving landscape of automotive user experience as we progress towards higher levels of vehicle autonomy.

Author Contributions

Conceptualization, P.C. and G.E.; software, P.C. and G.E.; validation, P.C., G.E. and M.Z.; formal analysis, P.C., G.E. and M.Z.; investigation, P.C.; resources, R.R.; data curation, G.E. and M.Z.; writing—original draft preparation, P.C.; writing—review and editing, G.E., M.Z. and R.R.; visualization, G.E. and M.Z.; supervision, G.E. and R.R.; project administration, R.R.; funding acquisition, R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project Adaptive Multimodal In-Car Interaction (AMICI), funded by Business Finland (grant number 1316/31/2021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Klatzky, R.L.; Lederman, S.J. The intelligent hand. In Psychology of Learning and Motivation; Elsevier: Amsterdam, The Netherlands, 1988; Volume 21, pp. 121–151. [Google Scholar]
  2. Jarocka, E.; Pruszynski, J.A.; Johansson, R.S. Human touch receptors are sensitive to spatial details on the scale of single fingerprint ridges. J. Neurosci. 2021, 41, 3622–3634. [Google Scholar] [CrossRef]
  3. Detorakis, G.I.; Rougier, N.P. Structure of receptive fields in a computational model of area 3b of primary sensory cortex. Front. Comput. Neurosci. 2014, 8, 76. [Google Scholar] [CrossRef]
  4. Wu, J.Z.; Dong, R.G.; Rakheja, S.; Schopper, A.; Smutz, W. A structural fingertip model for simulating of the biomechanics of tactile sensation. Med. Eng. Phys. 2004, 26, 165–175. [Google Scholar] [CrossRef] [PubMed]
  5. Gerling, G.J.; Thomas, G.W. The effect of fingertip microstructures on tactile edge perception. In Proceedings of the First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Haptics Conference, Pisa, Italy, 18–20 March 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 63–72. [Google Scholar]
  6. Bolanowski, S.J.; Pawson, L. Organization of Meissner corpuscles in the glabrous skin of monkey and cat. Somatosens. Mot. Res. 2003, 20, 223–231. [Google Scholar] [CrossRef] [PubMed]
  7. Coe, P.; Evreinov, G.; Raisamo, R. The Impact of Different Overlay Materials on the Tactile Detection of Virtual Straight Lines. Multimodal Technol. Interact. 2023, 7, 35. [Google Scholar] [CrossRef]
  8. Birznieks, I.; Macefield, V.G.; Westling, G.; Johansson, R.S. Slowly adapting mechanoreceptors in the borders of the human fingernail encode fingertip forces. J. Neurosci. 2009, 29, 9370–9379. [Google Scholar] [CrossRef]
  9. Jones, L.A. Matching forces: Constant errors and differential thresholds. Perception 1989, 18, 681–687. [Google Scholar] [CrossRef]
  10. Baki, P.; Székely, G.; Kósa, G. Miniature tri-axial force sensor for feedback in minimally invasive surgery. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 805–810. [Google Scholar]
  11. Peirs, J.; Clijnen, J.; Reynaerts, D.; Van Brussel, H.; Herijgers, P.; Corteville, B.; Boone, S. A micro optical force sensor for force feedback during minimally invasive robotic surgery. Sens. Actuators A Phys. 2004, 115, 447–455. [Google Scholar] [CrossRef]
  12. Gan, L.S.; Zareinia, K.; Lama, S.; Maddahi, Y.; Yang, F.W.; Sutherland, G.R. Quantification of forces during a neurosurgical procedure: A pilot study. World Neurosurg. 2015, 84, 537–548. [Google Scholar] [CrossRef]
  13. Ehrenstein, W.H.; Ehrenstein, A. Psychophysical methods. In Modern Techniques in Neuroscience Research; Springer: Berlin/Heidelberg, Germany, 1999; pp. 1211–1241. [Google Scholar]
  14. Gescheider, G.A. Psychophysics: The Fundamentals; Psychology Press: London, UK, 2013. [Google Scholar]
  15. Khabbaz, F.H.; Goldenberg, A.; Drake, J. Force discrimination ability of the human hand near absolute threshold for the design of force feedback systems in teleoperations. Presence Teleoperators Virtual Environ. 2016, 25, 47–60. [Google Scholar] [CrossRef]
  16. Botturi, D.; Galvan, S.; Vicentini, M.; Secchi, C. Perception-centric force scaling function for stable bilateral interaction. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 4051–4056. [Google Scholar]
  17. Evreinov, G. Bimanual Input and Patterns of User Behavior. In Proceedings of the 5th International Conference on Methods and Techniques in Behavioral Research, Wageningen, The Netherlands, 30 August–2 September 2005; pp. 571–572. [Google Scholar]
  18. Doerrer, C.; Werthschuetzky, R. Simulating push-buttons using a haptic display: Requirements on force resolution and force-displacement curve. In Proceedings of the EuroHaptics, Edinburgh, UK, 8–10 July 2002; pp. 41–46. [Google Scholar]
  19. Tan, H.Z.; Pang, X.D.; Durlach, N.I. Manual resolution of length, force, and compliance. Adv. Robot. 1992, 42, 13–18. [Google Scholar]
  20. Srinivasan, M.A.; LaMotte, R.H. Tactual discrimination of softness. J. Neurophysiol. 1995, 73, 88–101. [Google Scholar] [CrossRef] [PubMed]
  21. Lederman, S.J.; Klatzky, R.L. Extracting object properties through haptic exploration. Acta Psychol. 1993, 84, 29–40. [Google Scholar] [CrossRef] [PubMed]
  22. Lelevé, A.; McDaniel, T.; Rossa, C. Haptic training simulation. Front. Virtual Real. 2020, 1, 3. [Google Scholar] [CrossRef]
  23. Coe, P.; Evreinov, G.; Ziat, M.; Raisamo, R. A Universal Volumetric Haptic Actuation Platform. Multimodal Technol. Interact. 2023, 7, 99. [Google Scholar] [CrossRef]
  24. Rørvik, S.B.; Auflem, M.; Dybvik, H.; Steinert, M. Perception by palpation: Development and testing of a haptic ferrogranular jamming surface. Front. Robot. AI 2021, 8, 745234. [Google Scholar] [CrossRef] [PubMed]
  25. Evreinov, G.; Raisamo, R. One-directional position-sensitive force transducer based on EMFi. Sens. Actuators A Phys. 2005, 123, 204–209. [Google Scholar] [CrossRef]
  26. Firouzeh, A.; Mizutani, A.; Groten, J.; Zirkl, M.; Shea, H. PopTouch: A Submillimeter Thick Dynamically Reconfigured Haptic Interface with Pressable Buttons. Adv. Mater. 2023, 36, 2307636. [Google Scholar] [CrossRef]
  27. Shultz, C.; Harrison, C. Flat Panel Haptics: Embedded Electroosmotic Pumps for Scalable Shape Displays. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–16. [Google Scholar]
  28. Lee, D.Y.; Jeong, S.H.; Cohen, A.J.; Vogt, D.M.; Kollosche, M.; Lansberry, G.; Mengüç, Y.; Israr, A.; Clarke, D.R.; Wood, R.J. A wearable textile-embedded dielectric elastomer actuator haptic display. Soft Robot. 2022, 9, 1186–1197. [Google Scholar] [CrossRef]
  29. Rekimoto, J.; Ishizawa, T.; Schwesig, C.; Oba, H. PreSense: Interaction techniques for finger sensing input devices. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Vancouver, BC, Canada, 2–5 November 2003; pp. 203–212. [Google Scholar]
  30. Hinckley, K.; Sinclair, M. Touch-sensing input devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, PA, USA, 15–20 May 1999; pp. 223–230. [Google Scholar]
  31. Stewart, C.; Hoggan, E.; Haverinen, L.; Salamin, H.; Jacucci, G. An exploration of inadvertent variations in mobile pressure input. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, San Francisco, CA, USA, 21–24 September 2012; pp. 35–38. [Google Scholar]
  32. Rekimoto, J.; Schwesig, C. PreSenseII: Bi-directional touch and pressure sensing interactions with tactile feedback. In Proceedings of the CHI’06 Extended Abstracts on Human Factors in Computing Systems, Montréal, QC, Canada, 22–27 April 2006; pp. 1253–1258. [Google Scholar]
  33. Evreinov, G.; Evreinova, T.; Preusche, C.; Hulin, T.; Raisamo, R. Surgical Knot Tying Skills: Training the Novices in an Asymmetric Bimanual Task. In Proceedings of the SKILLS09-Enaction on SKILLS: The International Conference on Multimodal Interfaces for Skills Transfer, Bilbao Exhibition Centre, Barakaldo, Spain, 15–16 December 2009; pp. 187–188. [Google Scholar]
  34. Brewster, S.A.; Hughes, M. Pressure-based text entry for mobile devices. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, Bonn, Germany, 15–18 September 2009; pp. 1–4. [Google Scholar]
  35. Stewart, C.; Rohs, M.; Kratz, S.; Essl, G. Characteristics of pressure-based input for mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 801–810. [Google Scholar]
  36. Wilson, G.; Brewster, S.A.; Halvey, M.; Crossan, A.; Stewart, C. The effects of walking, feedback and control method on pressure-based interaction. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, Stockholm, Sweden, 30 August–2 September 2011; pp. 147–156. [Google Scholar]
  37. Essl, G.; Rohs, M.; Kratz, S. Squeezing the sandwich: A mobile pressure-sensitive two-sided multi-touch prototype. In Proceedings of the Demonstration at UIST’09, Victoria, BC, Canada, 4–7 October 2009. [Google Scholar]
  38. Liao, Y.C.; Kim, S.; Oulasvirta, A. One button to rule them all: Rendering arbitrary force-displacement curves. In Proceedings of the Adjunct: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, 14–17 October 2018; pp. 111–113. [Google Scholar]
  39. Sato, S.; Okamoto, S.; Matsuura, Y.; Yamada, Y. Wearable finger pad deformation sensor for tactile textures in frequency domain by using accelerometer on finger side. ROBOMECH J. 2017, 4, 19. [Google Scholar] [CrossRef]
  40. Kaaresoja, T.; Brewster, S.; Lantz, V. Towards the temporally perfect virtual button: Touch-feedback simultaneity and perceived quality in mobile touchscreen press interactions. ACM Trans. Appl. Percept. 2014, 11, 1–25. [Google Scholar] [CrossRef]
  41. Bici, M.; Guachi, R.; Bini, F.; Mani, S.F.; Campana, F.; Marinozzi, F. Endo-Surgical Force Feedback System Design for Virtual Reality Applications in Medical Planning. Int. J. Interact. Des. Manuf. 2024, 1–9. [Google Scholar] [CrossRef]
  42. Bergholz, M.; Ferle, M.; Weber, B.M. The benefits of haptic feedback in robot assisted surgery and their moderators: A meta-analysis. Sci. Rep. 2023, 13, 19215. [Google Scholar] [CrossRef] [PubMed]
  43. Zhao, B.; Nelson, C.A. A sensorless force-feedback system for robot-assisted laparoscopic surgery. Comput. Assist. Surg. 2019, 24, 36–43. [Google Scholar] [CrossRef] [PubMed]
  44. Kim, H.; Kwon, S.; Heo, J.; Lee, H.; Chung, M.K. The effect of touch-key size on the usability of In-Vehicle Information Systems and driving safety during simulated driving. Appl. Ergon. 2014, 45, 379–388. [Google Scholar] [CrossRef] [PubMed]
  45. Holzinger, A. Finger instead of mouse: Touch screens as a means of enhancing universal access. In Proceedings of the ERCIM Workshop on UI for All, Paris, France, 23–25 October 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 387–397. [Google Scholar]
  46. Scott, B.; Conzola, V. Designing touch screen numeric keypads: Effects of finger size, key size, and key spacing. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, USA, 1 October 1997; SAGE Publications: Los Angeles, CA, USA, 1997; Volume 41, pp. 360–364. [Google Scholar]
  47. Liu, J.; Yang, S.; Ye, Z. Automotive display trend and tianma’s directions. In Proceedings of the 2019 26th International Workshop on Active-Matrix Flatpanel Displays and Devices (AM-FPD), Kyoto, Japan, 2–5 July 2019; IEEE: Piscataway, NJ, USA, 2019; Volume 26, pp. 1–4. [Google Scholar]
  48. Frediani, G.; Carpi, F. Tactile display of softness on fingertip. Sci. Rep. 2020, 10, 20491. [Google Scholar] [CrossRef] [PubMed]
  49. Landymore, F. Death to Car Touchscreens, Buttons Are Back, Baby! 2023. Available online: https://futurism.com/the-byte/car-touchscreens-buttons-back (accessed on 14 April 2024).
  50. Zipper, D. The Glorious Return of an Old-School Car Feature. 2023. Available online: https://slate.com/business/2023/04/cars-buttons-touch-screens-vw-porsche-nissan-hyundai.html (accessed on 14 April 2024).
  51. Suh, Y.; Ferris, T.K. On-road evaluation of in-vehicle interface characteristics and their effects on performance of visual detection on the road and manual entry. Hum. Factors 2019, 61, 105–118. [Google Scholar] [CrossRef] [PubMed]
  52. Tao, D.; Yuan, J.; Liu, S.; Qu, X. Effects of button design characteristics on performance and perceptions of touchscreen use. Int. J. Ind. Ergon. 2018, 64, 59–68. [Google Scholar] [CrossRef]
  53. Jung, E.S.; Im, Y. Touchable area: An empirical study on design approach considering perception size and touch input behavior. Int. J. Ind. Ergon. 2015, 49, 21–30. [Google Scholar] [CrossRef]
  54. Chen, H.Y.; Park, J.; Dai, S.; Tan, H.Z. Design and evaluation of identifiable key-click signals for mobile devices. IEEE Trans. Haptics 2011, 4, 229–241. [Google Scholar] [CrossRef]
  55. National Highway Traffic Safety Administration. Visual-Manual NHTSA Driver Distraction Guidelines for in-Vehicle Electronic Devices; Docket No. NHTSA-2014-0088; National Highway Traffic Safety Administration: Washington, DC, USA, 2014. [Google Scholar]
  56. Vikström, F.D. Physical buttons outperform touchscreens in new cars, test finds. Vi Bilägare, 20 January 2022. [Google Scholar]
  57. Nayyar, A.; Puri, V. A review of Arduino board’s, Lilypad’s & Arduino shields. In Proceedings of the 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1485–1492. [Google Scholar]
  58. Baumann, P. Force Sensors. In Selected Sensor Circuits: From Data Sheet to Simulation; Springer: Berlin/Heidelberg, Germany, 2022; pp. 103–130. [Google Scholar]
  59. Dyck, P.J.; Schultz, P.W.; O’Brien, P.C. Quantitation of touch-pressure sensation. Arch. Neurol. 1972, 26, 465–473. [Google Scholar] [CrossRef] [PubMed]
  60. Lindblom, U. Touch perception threshold in human glabrous skin in terms of displacement amplitude on stimulation with single mechanical pulses. Brain Res. 1974, 82, 205–210. [Google Scholar] [CrossRef]
  61. Neonode. Specifications Summary. Available online: https://support.neonode.com/docs/display/AIRTSUsersGuide/Specifications+Summary (accessed on 14 April 2024).
  62. Shao, Y.; Hayward, V.; Visell, Y. Spatial patterns of cutaneous vibration during whole-hand haptic interactions. Proc. Natl. Acad. Sci. USA 2016, 113, 4188–4193. [Google Scholar] [CrossRef] [PubMed]
  63. Enferad, E.; Giraud-Audine, C.; Giraud, F.; Amberg, M.; Semail, B.L. Generating controlled localized stimulations on haptic displays by modal superimposition. J. Sound Vib. 2019, 449, 196–213. [Google Scholar] [CrossRef]
  64. Pantera, L.; Hudin, C. Sparse actuator array combined with inverse filter for multitouch vibrotactile stimulation. In Proceedings of the 2019 IEEE World Haptics Conference (WHC), Tokyo, Japan, 9–12 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 19–24. [Google Scholar]
  65. Hudin, C.; Lozada, J.; Hayward, V. Localized tactile feedback on a transparent surface through time-reversal wave focusing. IEEE Trans. Haptics 2015, 8, 188–198. [Google Scholar] [CrossRef]
  66. Banter, B. Touch screens and touch surfaces are enriched by haptic force-feedback. Inf. Disp. 2010, 26, 26–30. [Google Scholar] [CrossRef]
  67. Roth, C. Design of the In-Vehicle Experience; Technical Report, SAE Tech. Paper; SAE International: Warrendale, PA, USA, 2022. [Google Scholar]
  68. Shladover, S.E. The truth about “self-driving” cars. Sci. Am., 1 December 2016; Volume 314. [Google Scholar]
Figure 1. A schematic imaging of the fingerprint area, with skin ridges (a) and receptive fields (b) of sensory neurons (adapted from Jarocka et al. [2]).
Figure 2. The force accompanying the finger pad interactions with a contact surface.
Figure 3. Forces exerted by the finger pad and experienced by the user at the point of contact.
Figure 4. A schematic illustration of the project concept. F is the force index.
Figure 5. (Left): Force feedback levels for linear and logarithmic modes (newtons). (Right): Voltage-to-force feedback curve.
Figure 6. (Left): A flowchart of the usability test software for interaction with the force feedback enhanced slider. (Right): Participant’s exploration.
Figure 7. Accuracy for FF levels (1 through 7) under both voltage conditions (linear and logarithmic). Error bars represent confidence intervals (CIs) at 95%. The dashed line indicates perfect accuracy (ratio of 1).
Figure 8. Raincloud plots showing individual performance for each FF level under linear (green) and logarithmic (orange) stimulation modes.
Figure 9. Exploration time per FF level and voltage type. Error bars represent 95% confidence intervals.
Table 1. FF values (in N) related to each level index.
FF level index       1      2      3      4      5      6      7
FF linear, N         0.022  0.031  0.040  0.054  0.085  0.135  0.173
JND linear           –      0.340  0.254  0.298  0.446  0.455  0.247
FF logarithmic, N    0.022  0.026  0.028  0.033  0.043  0.064  0.173
JND logarithmic      –      0.167  0.074  0.164  0.263  0.393  0.920
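The JND rows in Table 1 follow from the force rows: each entry matches the difference between adjacent force levels divided by their midpoint (a Weber fraction computed over the interval). A sketch, assuming that definition, which reproduces both JND rows to three decimals:

```python
def jnd_weber(forces):
    """Weber fraction between adjacent levels, relative to the interval midpoint."""
    return [
        round((f2 - f1) / ((f1 + f2) / 2), 3)
        for f1, f2 in zip(forces, forces[1:])
    ]

# Force levels (N) from Table 1
ff_linear = [0.022, 0.031, 0.040, 0.054, 0.085, 0.135, 0.173]
ff_log = [0.022, 0.026, 0.028, 0.033, 0.043, 0.064, 0.173]

print(jnd_weber(ff_linear))  # [0.34, 0.254, 0.298, 0.446, 0.455, 0.247]
print(jnd_weber(ff_log))     # [0.167, 0.074, 0.164, 0.263, 0.393, 0.92]
```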
Table 2. Mean accuracy and standard deviation for FF levels under different type of stimulation modes.
FF level   Mean (linear)  SD (linear)  Mean (log)  SD (log)
1          1.187          0.126        1.239       0.076
2          1.116          0.114        1.147       0.060
3          1.031          0.093        1.078       0.105
4          1.016          0.088        1.021       0.074
5          0.951          0.075        0.947       0.070
6          0.896          0.084        0.904       0.064
7          0.844          0.063        0.890       0.048
Table 3. NASA-TLX descriptive statistics: mean, SD, and CIs.
                    Mental demand  Physical demand  Temporal demand  Performance  Effort   Frustration
Mean                12.313         8.938            7.438            10.063       11.750   7.750
95% CI mean upper   14.957         12.167           9.796            13.191       14.526   10.914
95% CI mean lower   9.668          5.708            5.079            6.934        8.974    4.586
Std. deviation      4.963          6.060            4.427            5.870        5.209    5.939
Table 4. Component loadings of NASA-TLX.
                   RC1     RC2      Uniqueness
Physical demand    0.906            0.169
Mental demand      0.793            0.393
Effort             0.784            0.399
Temporal demand    0.583   0.512    0.287
Performance                -0.892   0.193
Frustration                0.825    0.195
