1. Introduction
In the fundamental paper “The Intelligent Hand” by Klatzky and Lederman [
1], the authors open their account of the general theory of haptic apprehension with the following two statements: “Haptics is very poor at apprehending spatial-layout information in a two-dimensional plane” and “Haptics is very good at learning about and recognizing three-dimensional objects”. This motivates our primary research question: Do these statements remain valid with regard to what is experienced through a single touch?
Let us imagine that the finger pad is a small window onto the fascinating world of sensations. But before a human is able to comprehend the endless variety of sensations, they must learn to explore and properly interpret the cues available at a single point of touch by linking this afferent flow to that of haptic imagination. When a person’s haptic imagination is mature enough, it is no longer necessary to stimulate the finger pad to induce the sensation, as it becomes possible to do so directly in the person’s imagination.
It is well known that human finger pads have thousands of receptors (mechanoreceptors, nociceptors and thermoreceptors). However, there are no special organs (cells or formations) in the human skin specifically sensitive to vibration, acceleration, gravity or other physical parameters, including electrical current or magnetic fields, as one might erroneously believe. Most mechanoreceptors located inside the dermis can perceive only mechanical energy as pressure force and micro-displacements translated by surrounding dermal tissues to the nerve endings and special cells/corpuscles (Meissner, Merkel, Pacinian, Ruffini and Krause end bulb) aggregated into heterogeneous receptive fields (
Figure 1) with multiple highly sensitive zones distributed within an area typically covering five to ten fingerprint ridges [
2].
To date, we have not found related research on the comparative morphology or evolution of receptive fields on the finger pads of primates vs. humans. Since receptive fields are morphological formations [
3], we can suppose that they emerged and evolved through the optimization of afferent flow processing during continuous exploration of external objects. The exploration of objects relies on a limited number of available parsing signals or physical parameters, namely, the force applied during exploration. Exploration of the contact surface may occur when the sensitive surface of the finger pad comes into direct contact with the external object by pushing against the contact location, or when the contact surface of the object moves with respect to the finger pad. In either case, the finger pad exerts an initial force against the contact surface in the direction of the surface, thus generating a repulsive force determined by the physical properties of the surface or object (
Figure 2). Exploration movements, or relative displacements of the finger pad in the vicinity of the contact location, may occur in at least two orthogonal directions (lateral and longitudinal), both tangential with respect to the applied normal force.
Let us consider in more detail the forces acting at the contact location. We could refer to the structural biomechanics presented by Wu and co-authors [
4] and Gerling and Thomas [
5], or the anatomical details disclosed by Bolanowski and Pawson [
6], but what is more important for the considerations that follow is the dynamic ratio of the force components acting within the finger pad and in the vicinity of the contact location (
Figure 3).
The finger pad, or fingertip, acts against a contact surface featuring embedded pressure sensors that detect parameters of the physical impact at the place of contact. Tools such as Doppler scanners can detect surface deformation and micro-displacements at the place of contact with micrometer-level resolution and accuracy, even in the absence of direct contact [
7].
Nevertheless, the elastic tissues of the fingertip are strained by being squeezed against the bone of the distal phalanx and the nail [
8]. And, as the authors rightly pointed out, “Encoding fingertip forces in afferents that terminate dorsally in the fingertip might be advantageous because fine-form features of the contacted surfaces would influence the afferent signals less than with afferents terminating in the finger pulp”. Thus, the resultant force applied to the contact surface is the vector sum of the force applied by the user during exploration minus the repulsive force exerted by the contact surface and the elastic response of the finger pad. The array of resultant forces distributed within the vicinity of the finger pad contact location, or fingertip contact point, changes dynamically, generating a complex ensemble of afferents [
9] of the nearest dermal receptive fields and tactile afferents, conveying information about contact forces in sync with the kinesthetic flow, in accordance with exploration behavior [
8].
Force discrimination ability is an important perceptual sense that can maintain high-precision dexterous manipulations due to the force feedback (FF) loop directly at the level of skin mechanoreceptors [
2], as well as due to the proprioceptors embedded in the muscles, tendons and ligaments. FF is important for many types of surgical manipulations, especially in microsurgery when performing delicate manipulations with thin living structures, for instance, as in assisted reproductive technology. Forces applied in in vivo experiments with soft tissue and minimally invasive surgeries can vary between 0 and 2.5 N [
10,
11], including, in some neurosurgical tasks, forces smaller than 0.3 N [
12].
Psychophysics quantitatively sets the relationship between the physical signals inducing sensations and the cognitive processes shaping perception, i.e., between the parameters related to physical world exploration and psychological concepts. In psychophysics, the just-noticeable difference (JND) is a quantitative measure defined as the smallest change in intensity (ΔI) relative to the stimulus intensity (I) that induces a change in stimulus perception [13]. The JND is usually dimensionless (JND = ΔI/I). According to Weber’s law, at least at forces greater than 2 N, the human ability to perceive a change in a stimulus is proportional to the intensity of the stimulus, and this ratio is a constant known as the Weber fraction, according to Gescheider [
14]. However, the detection of stimuli at very low intensities does not adhere to Weber’s law [
9]. Close to the absolute threshold,
the JND is significantly larger and exhibits an inverse relationship with the intensity of the stimulus.
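The behavior described above can be sketched with the generalized form of Weber’s law, ΔI = k·I + a, where the additive constant a models the inflated JND near the absolute threshold. The Weber fraction k and constant a below are illustrative placeholders, not values measured in the cited studies:

```python
def weber_jnd(intensity_n: float, weber_fraction: float = 0.1,
              a_n: float = 0.05) -> float:
    """Smallest detectable force change (N) for a given reference force.

    Generalized Weber's law: delta_I = k * I + a. The additive constant a
    models the inflated JND observed near the absolute threshold; both k
    and a here are illustrative assumptions, not fitted values.
    """
    return weber_fraction * intensity_n + a_n

# The relative JND (delta_I / I) grows as intensity falls toward threshold:
for force_n in (0.1, 0.5, 2.0):
    print(f"{force_n:.1f} N -> relative JND {weber_jnd(force_n) / force_n:.2f}")
```

At forces well above 2 N the additive term becomes negligible and the relative JND approaches the constant Weber fraction, consistent with the classical law.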
A group of researchers from the University of Toronto [
15] examined the human hand’s force perception abilities in subjects having different hand sensitivity by virtue of their vocational training, i.e., surgeons and non-surgeons, near the absolute perceptual threshold for the forces that do not follow Weber’s law (0.1 N, 0.3 N, 0.6 N and 0.8 N).
The equation presented below suggests that the difference threshold for low-intensity forces does not maintain a steady proportion of the reference force. Instead, this proportion is inherently dependent on the reference stimulus itself. Consequently, employing the model of
JND as a basis for developing the force scaling function is a logical approach.
A similar approach was also recently pursued by Botturi et al. [
16]. The proposed force feedback scaling function is F_m = k(F_s) · F_s, where, as explained by Botturi, F_s is the force at the slave–environment interface and F_m is the force fed back to the master device. The notation k(F_s) indicates that the scaling factor is a function of the sensed force at the slave interface.
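A minimal sketch of such a JND-aware scaling function is given below; the exponential profile for the scaling factor and its constants are hypothetical illustrations, not Botturi et al.’s exact function:

```python
import math

def scaled_feedback(f_slave_n: float, k_max: float = 4.0,
                    k_min: float = 1.0, f_ref_n: float = 1.0) -> float:
    """Force fed back to the master device: F_m = k(F_s) * F_s.

    The scaling factor k decays from k_max toward k_min as the sensed
    force grows, amplifying weak forces where the relative JND is large.
    The exponential profile and all constants are illustrative assumptions.
    """
    k = k_min + (k_max - k_min) * math.exp(-f_slave_n / f_ref_n)
    return k * f_slave_n
```

The design intent is that low-intensity forces, which fall outside Weber’s law and are hard to perceive, are amplified more strongly than forces in the Weber-law regime.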
2. Research on Force Feedback Applications at JND Level
As demonstrated in [
17] (Evreinov, 2005), the two-handed manipulation of analog buttons (by pressing them with the thumbs) can provide accurate target (12 × 12 pixels, or 4 × 4 mm) acquisition within the working field of the screen (768 × 768 pixels, or 252 × 252 mm) when bimanual cursor pointing (on the X- and Y-axes, respectively) does not exceed a range of ±2 mm of physical button displacements. This implies that an FF JND step as low as 0.15 N, within a range between 0.1 N and 1.5 N, can provide only 10% input accuracy based on FF and button displacements of ±0.2 mm. When combined with visual feedback, the system can achieve a resolution accuracy within ±5 pixels, or approximately ±1.3%. This means that, with additional feedback methods, the precision of bimanual cursor pointing can be improved to around 3.9%, which corresponds to a force feedback intensity of 60 mN.
By exploring
JND force perception during the pressing of a button in the range 0.5–2.5 N of the reference force, Doerrer and Werthschützky (2002) [
18] revealed that, on average, participants were able to perceive a sudden change in FF larger than 100 mN. In relation to the reference forces between 0.5 N and 2.5 N, the just-noticeable relative force differences decreased from 20% to 5%. Within the range of 1.5–2.5 N, the determined mean JNDs were 7.5% and 5.5%, respectively. These results were consistent with the earlier-reported values of 5–10% stated by Tan et al. (1992) [
19].
The study of the psychophysical perception of distributed pressure forces upon exploration by fingertips has long been of interest [
7,
19,
20,
21,
22,
23]. Rørvik and co-authors [
24] investigated whether untrained people could locate and determine by palpation the shapes and hardness of irregularities rendered between two compliant layers by using the ferrogranular jamming principle.
Evreinov and Raisamo (2005) [
25] investigated how untrained subjects were able to memorize and reproduce a week later a sequence of four dynamic patterns of distributed pressure profiles (
Figure 1). The pressure levels varied within compression and dilatation phases from 10 N to 0.15 N, taking in total about 300 ms for each of four behavioral patterns of distributed pressure profiles. The study proved that untrained subjects were able to memorize the four dynamic patterns of self-sense profiles of the fingertip and reproduce them a week later with high accuracy.
In their PopTouch thin-film array of dynamically reconfigurable physical buttons, Firouzeh et al. [
26] used hydraulically amplified self-healing electrostatic (HASEL) stretchable caps. Each button produced a 1.5 mm out-of-plane displacement and supported a holding force of up to 1.5 N before a sudden snap-through, providing an instinctive “click” sensation akin to that of a pushbutton.
Recently, Shultz and Harrison [
27] introduced a haptic display that utilizes a series of electro-osmotic micropumps, each dedicated to a single button. This device is capable of producing an out-of-plane displacement of up to 6 mm, and it can exert a force exceeding 1 N for a button cap with a diameter of 10 mm. However, the device’s stiffness, its 5 mm thickness and its substantial energy demand (on the order of 1–2 W) restrict its practical applications. Additionally, the device’s lack of transparency hinders its incorporation into touchscreen interfaces.
Using multi-layered dielectric elastomer (MLDE), Lee and colleagues [
28] developed a 20 mm diameter, 1.5 mm thick haptic actuator that can generate a 2 mm out-of-plane displacement at a 250 mN holding force. While effective in providing skin stimulation at direct contact, a higher holding force (around 1 N) is required to simulate the action of various physical buttons.
Rekimoto and co-authors (2003) [
29] proposed to complement the pressure-based input with a capacitive sensor at the contact location. However, as stated earlier by Hinckley and Sinclair (1999) [
30] regarding capacitive sensors that require zero activation force to trigger the contact, “they may be prone to accidental activation due to inadvertent contact”. Therefore, to differentiate inadvertent pressure changes from
JND signals that must support precise dexterous control in some applications, other authors have studied the specific conditions of use in mobile devices and interaction techniques, when pressure variation can impact the interpretation of the perceived force signals.
For instance, Stewart et al. [
31] shared findings from a preliminary investigation into accidental changes in grip pressure on mobile devices, observed under both stationary laboratory conditions and while walking, with significant pressure fluctuations noted in each scenario. The study utilized the FSR-402 sensor in experimental setups, observing pressure variations from 0.1 N to 3–10 N, a range supported by the sensor and deemed ergonomically suitable for fingertip pressure examinations. However, to make use of pressure inputs effectively, they suggested filtering out accidental variations by integrating pressure measurements with accelerometer data. Given that unintentional pressure changes typically fell between 0 and 0.6 N, setting thresholds beyond 0.6 N was recommended for reliable detection of deliberate pressure inputs.
Though Rekimoto and Schwesig (2006) [
32] demonstrated only a 3-level pressure-based button (“not pressed”, “light-pressed” and “hard-pressed”), providing tactile feedback upon crossing these levels, the results of various studies of the pressure sense as an auxiliary input modality [
17,
31,
33,
34,
35,
36] have shown that users can distinguish and apply up to 10 pressure levels with high degrees of accuracy when navigating through different types of menus with visual and audio feedback.
Stewart and their team [
35] conducted a series of experiments to understand fundamental aspects of pressure-based interaction. In particular, they tried to assess a single-sided input [
25] against the grasping pressure delivered through a two-sided interaction paradigm, studying the controllability of fingertip pressure at different levels over a longer period of time (e.g., for five seconds rather than a single pressure pattern). In (Evreinov 2005) [
17], dwell time for target acquisition was used for an uninterrupted period of only 300 ms. The results [
35,
37] suggested that using a grasping motion is more effective than a one-sided input and rivals the efficiency of pressure input when applied to solid surfaces.
Liao and colleagues developed the Button Simulator [
38], a device that can simulate the physical button action by employing any given force–displacement curve. The Button Simulator had a low average error offset of around 0.034 N, meaning that the simulated force was very close to the actual force expected. However, the simulated sensation was not perceived as realistic when the button was pressed too fast. Additional research demonstrated that the deformation of the skin caused by the tangential force on the fingertip can be expressed by using a spring–mass–damper model, indicating a linear relationship between deformation and force [
39]. Research by Kaaresoja at Nokia focused on the implementation of vibrotactile feedback accompanied by visual and audio feedback to find latency thresholds required to make virtual button presses feel natural [
40].
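The spring–mass–damper relationship reported in [39] can be sketched as follows. For slow presses the inertial term is negligible, and the stiffness and damping constants below are illustrative placeholders, not the parameters fitted in that study:

```python
def fingertip_force(deformation_m: float, velocity_m_s: float,
                    stiffness_n_per_m: float = 500.0,
                    damping_n_s_per_m: float = 2.0) -> float:
    """Quasi-static spring-damper estimate of fingertip contact force.

    F = k*x + b*v; the mass (inertial) term is omitted for slow presses.
    Stiffness and damping values are illustrative assumptions.
    """
    return stiffness_n_per_m * deformation_m + damping_n_s_per_m * velocity_m_s

# A static 1 mm deformation with these placeholder constants gives 0.5 N.
print(fingertip_force(0.001, 0.0))
```

The linearity between deformation and force in this regime is what allows a force–displacement curve to be inverted when simulating a button press.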
Recent studies have reinforced the value of force feedback systems across various applications, particularly in enhancing user experience and performance in high-precision tasks. For instance, the integration of force feedback in virtual reality for medical planning has shown promise in simulating surgical procedures like clamping in colorectal surgery, highlighting the system’s potential in medical training [
41]. Moreover, a meta-analysis on robot-assisted surgery quantified the benefits of haptic feedback, revealing significant improvements in surgical accuracy and a reduction in the force applied by surgeons [
42]. Another innovative approach involves a sensorless feedback system that aids surgeons in distinguishing among different tissue types during laparoscopic surgery, with a high success rate in tumor identification [
43]. These studies show the effectiveness of force feedback technologies in providing realistic tactile sensations that can improve outcomes in medical settings.
These diverse approaches, ranging from tactile feedback mechanisms to advanced sensory stimulation technologies, collectively advance the field of virtual button design, offering innovative solutions that enhance user interaction in multimedia applications, especially in contexts where visual attention is limited, like driving. We hypothesized that the repulsive FF at the JND level can be used for simulating the sense of pushbutton bank switches.
3. Application of Repulsive Force Feedback at JND Level for Simulating Sense of Pushbutton Bank Switches—Case Study
We studied the use of repulsive force feedback (FF) at the JND level when interacting with the touch-sensitive surface of a slider used for the TactoTek in-vehicle control panel. Participants were asked to explore the slider and locate an assigned level of FF (one of seven force levels), which they felt through light touch and which had been presented to the fingertip at the starting position. The target level had to be found among adjacent segments, separated by tactile slits, whose repulsive forces had been assigned randomly as six pressure distractors selected in a range of 20–170 mN. The participants were able to accurately identify force levels between 40 mN and 54 mN for the linear scale and between 28 mN and 33 mN for the logarithmic scale of repulsive forces. Independently of the repulsive force scale, values below this range were overestimated, while values above this range were underestimated. The task is crucial to understanding the tactile ability to discriminate force feedback upon exploration of touch-sensitive surfaces when the level of repulsive forces is less than 200 mN and no other accompanying afferent information (visual and/or auditory) is available. The results of studies on repulsive force modulation as an active response of the contact surface can be used to modify human perception during exploration of, or interaction with, a physical surface, creating the illusion of surface irregularities, a specific relief or dynamic parameters (such as friction, slipperiness and flexibility) in virtual reality for the development of more realistic 3D models simulating processes, objects and environments.
The number of physical buttons in vehicles has decreased in part as an effort to reduce a driver’s workload [
44,
45,
46]. In parallel with this reduction, many conventional controls have been swapped out in favor of smooth surfaces with capacitive touch interfaces as part of bright, colorful displays [
47]. More and more functions are then added to these interfaces, culminating in numerous sources of distraction. This creates a paradoxical situation where technological and ergonomic advances may impair usability by diverting a driver’s attention away from the road. However, previous research has shown that using haptic displays to enhance user interaction with in-vehicle infotainment systems can reduce task completion time and distraction while driving [
48].
Reliability and reduced costs are some of the main reasons why car makers have moved away from physical buttons [
49,
50]. That said, interaction performance with touch displays has so far not reached the level of comparable physical interfaces, nor have touch displays been able to mimic the physical displacement forces that would be applied to the finger [
18,
51,
52,
53,
54].
Touch screen interfaces require users to select and activate specific functions precisely, which demands continuous visual attention and distracts the driver from the road by increasing eyes-off-the-road time (EORT) [
55]. In recent studies, researchers [
56] have found that touchscreens represent a significant step back for auto design by worsening driver performance. As these displays became more common in vehicles in the 2010s, it was found that these systems bring with them a significantly increased crash risk [
50].
We believe that the touch-sensitive surface is a valuable technology that offers many possibilities for innovation. In our work, we have transformed a capacitive touch control panel with a central touch-sensitive slider into a single actuated surface that simulates the behavior of seven pushbutton switches. This was achieved by implementing a powerful voice-coil actuator beneath the surface that has the ability to provide strong tactile feedback to a user (
Figure 4).
The objective of this work is to evaluate the efficiency of a stiff slider surface by simulating the flexibility and the force–displacement curve of seven pushbutton switches that can be pressed and released. The main purpose is to use this simulated force feedback (FF) to render haptically enhanced virtual buttons.
3.1. Participants
Sixteen participants (nine males and seven females; mean age = 35, SD = 10.02) took part in the test. They were healthy adults who did not report any cognitive or sensory impairment that could have affected their ability to complete the tasks. Before participating in the study, each participant signed a written consent form. The study was conducted in accordance with the guidelines of the Declaration of Helsinki.
3.2. Apparatus and Stimuli
The conceptual illustration of the seven pushbutton switches displayed at the base of the TactoTek control panel is shown in
Figure 4. A powerful voice-coil actuator was placed under the center of the control panel, with the panel itself attached to a pivot joint. Repeated force measurements were made across the panel to ensure that FF was applied equally at all points on the slider. The current design requires a minimal number of components to be affixed to the panel, without great modification to the original product and without the need for springs or complicated brackets to provide linear displacement in the direction orthogonal to the fingertip. The force feedback provided was easily tuned via an Arduino microcontroller [
57], and the whole setup was calibrated in advance by using an FSR402 pressure sensor [
58].
The voice coil had an internal diameter of 16 mm, an external diameter of 35 mm and a height of 20 mm. Six N35 neodymium magnets were stacked and placed in the center of the coil. This stack was attached directly to the underside of the TactoTek panel. A 0.35 mm copper magnet wire with approximately 500 turns was used for the coil, resulting in a measured impedance of 12.5 ohms. A maximum voltage of 16.5 VDC (a total current of 1.32 A, 21.78 W) was applied to the coil. Force feedback was adjusted as a percentage of the voltage level assigned to each of the seven slider zones, in a range between 0.022 N and 0.17 N, with the scale between FF zones being adjusted according to a linear or logarithmic voltage output. This range of forces was chosen because of its relevance to the sensitivity of the sense of touch [
59,
60].
Figure 5 displays a chart of the two scales.
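The spacing of the seven target levels on the two scales can be sketched as follows; the empirical voltage-to-force calibration performed with the FSR402 is not reproduced here, only the spacing of the levels between the stated endpoints:

```python
import numpy as np

F_MIN_N, F_MAX_N, ZONES = 0.022, 0.17, 7  # force range of the slider zones

linear_scale = np.linspace(F_MIN_N, F_MAX_N, ZONES)  # equal steps between zones
log_scale = np.geomspace(F_MIN_N, F_MAX_N, ZONES)    # equal ratios between zones

print(np.round(linear_scale, 3))
print(np.round(log_scale, 3))
```

A logarithmic spacing keeps the ratio between adjacent zones constant, which is the natural choice when perception follows a Weber-fraction (relative) law.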
The flowchart of the usability test software is shown in
Figure 6. The collected responses and relevant parameters of signals and stimuli were recorded into log files. When the test is started on a desktop computer, the software application “Slider FF profile” creates an array of indexes of FF signals for each mode of haptic stimulation in the test. This array consists of the force feedback indexes i (the Fi values) presented in
Table 1. At first, a test force feedback signal is delivered at the location marked with a red circle in
Figure 4. Seven force feedback signals, which include the test force feedback and six distractors, are presented randomly on the slider segments. The software application generates the array of the presented force feedback indexes and sends them to the Arduino microcontroller. The Arduino microcontroller assigns the FF values to the seven segments on the slider, based on the selected stimulation mode (linear or logarithmic). Fingertip location is monitored by an attached Neonode IR linear sensor [
61] with a touch accuracy of 0.1 mm and a scanning frequency of 500 Hz.
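A sketch of how a touch coordinate reported by the IR sensor could be mapped to one of the seven slider segments is shown below; the slider length is an assumed value, as the physical dimensions of the slider are not stated here:

```python
SLIDER_LENGTH_MM = 120.0  # assumed slider length; not specified in the text
SEGMENTS = 7

def segment_index(touch_mm: float) -> int:
    """Map a touch coordinate (mm along the slider) to a segment index
    in 0..6, clamping at the slider edges."""
    i = int(touch_mm // (SLIDER_LENGTH_MM / SEGMENTS))
    return max(0, min(SEGMENTS - 1, i))
```

With a 500 Hz scanning frequency, this lookup runs on every frame; the microcontroller then pulses the FF value indexed by the segment the fingertip has entered.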
3.3. Procedure
Participants were seated in a comfortable position with their dominant hand on the prototype and non-dominant hand close to a keyboard (
Figure 4). The experiment lasted approximately 30–40 min and consisted of three sessions: one training session and two experimental sessions (logarithmic and linear, presented in random order). The training session consisted of 20 trials. Each experimental session comprised 70 trials, in which each FF value was presented ten times.
For each trial, a test sample was placed at a random segment position, with all other segments presenting distractors (see
Table 1 for values). During the trial, the participant was asked to place their index fingertip at the starting location marked with tape (red circle in
Figure 4). After they pressed a key on the keyboard with their non-dominant hand, the test FF sample was presented for 20 ms. The choice of 20 ms was based on pulse decay time [
62] and to avoid possible interference with successive pulses felt at the fingertip while scanning the slider [
63,
64,
65]. Once the exposure time had passed, participants were asked to explore the slider segments. The test sample and six distractors were presented at random segment positions. The participant’s task was to identify which of the seven segments contained the FF matching that of the test sample.
Participants could freely explore the slider in any direction with no time limits, as long as the start key was held down. Each time the person’s fingertip slid onto any segment, the indexed FF signal was presented again for 20 ms. When the participant believed that the presented FF matched the test sample, they released the start key to lock in their response and complete the trial.
Immediately after completing the tests (all 70 trials), the participants were given a NASA Task Load Index (TLX) questionnaire to fill out. NASA-TLX measures six key dimensions: mental demand, physical demand, temporal demand, performance, effort and frustration [
45]. Each dimension is rated on a 100-point scale, with the combined scores providing an overall workload index. By utilizing NASA-TLX in our study, we aimed to capture a comprehensive understanding of the participants’ cognitive and physical experiences during the study, offering additional insights into the overall user experience and interaction with the system.
3.4. Results
This study gathered a range of data to assess the participants’ interaction with force feedback (FF) levels. All participants demonstrated a clear understanding of the task, successfully identifying the designated level of FF within a randomly assigned location on the slider, amidst varying adjacent FF levels, without needing interruptions or additional guidance. The task was designed to measure the participants’ ability to accurately locate specific FF levels on the slider. This task was crucial to understanding the tactile discernment capabilities of participants in differentiating among FF intensities. The use of various FF levels allowed for a comprehensive evaluation of the sensory response across a spectrum of force feedback stimuli.
3.4.1. Accuracy of FF Level Identification
To effectively analyze participants’ performance in locating the exact FF level on the slider segments, we employed a ratio-based approach. The ratio values for each FF level under both linear and logarithmic stimulation conditions were calculated. A ratio of 1 indicated precise identification of the presented FF level, whereas ratios below 1 signified underestimation and those above 1 overestimation of the FF level. As depicted in
Figure 7, the mean accuracy and confidence intervals (CIs) at 95% for each FF level are presented, allowing for a clearer depiction of the participants’ performance. The use of this ratio metric was instrumental in quantifying the degree of accuracy in the participants’ responses.
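The ratio metric can be computed as sketched below; the trial data structure is hypothetical, standing in for the logged (selected, presented) force pairs:

```python
from statistics import mean

def estimation_ratio(trials) -> float:
    """trials: iterable of (selected_force_n, presented_force_n) pairs.

    Returns the mean ratio: 1.0 indicates exact identification, values
    below 1 underestimation, and values above 1 overestimation of the
    presented FF level.
    """
    return mean(selected / presented for selected, presented in trials)

# e.g. one exact match and one overestimation of a 0.04 N test sample:
print(estimation_ratio([(0.04, 0.04), (0.05, 0.04)]))  # prints 1.125
```

Averaging ratios rather than absolute errors makes trials at different force levels directly comparable, which is why this metric suits a spectrum of FF intensities.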
A repeated measures ANOVA, with stimulation mode and FF level as factors, was conducted on the accuracy of responses. The results reveal significant effects for both main factors: voltage [F(1, 15) = 5.35, p < 0.05, ηp² = 0.26] and level [F(6, 90) = 84.95, p < 0.001, ηp² = 0.85]. All levels displayed significant differences in the participants’ responses, with the exception of levels 3 and 4 and levels 6 and 7. This non-significance at certain levels might indicate a perceptual threshold among participants, suggesting an area for further investigation. Raincloud plots, as illustrated in
Figure 8, provide a visual representation of the individual accuracy of responses and their distribution. In conjunction with
Table 2, these plots elucidate that the logarithmic voltage condition resulted in a higher tendency for participants to overestimate the presented FF, especially at levels 1, 2 and 5. This overestimation under logarithmic conditions could imply a perceptual bias or a cognitive processing difference when encountering logarithmic scales.
One-sample t-tests provided further insights, indicating that performances at levels 3 and 4 with a linear scale and level 4 with a logarithmic scale were notably accurate. Conversely, for levels 1 and 2, in both types of stimulation modes, the ratios were significantly higher than 1, indicating a consistent overestimation of the voltage presented. This could suggest that lower FF levels are more challenging to perceive accurately, leading to a natural tendency to overestimate. On the other hand, levels 5, 6 and 7 were significantly lower than 1, denoting an underestimation of the presented FF level. This underestimation at higher levels might reflect sensory saturation or a limitation in distinguishing finer gradations of force at higher intensities.
3.4.2. Time Efficiency
Regarding the time taken to respond, there was no significant difference in the main factors of FF levels and voltage conditions. The participants generally spent a similar amount of time (between 6.9 and 8.4 s) across all FF levels, suggesting a consistent cognitive load required for discerning different FF levels, irrespective of FF intensity or complexity.
Figure 9 visually represents the distribution of response times for each FF level and stimulation mode. The error bars indicate 95% confidence intervals, highlighting the variability in response times among participants. The similar response times across all FF levels reinforce the notion that the participants’ exploration and identification of FF levels were consistent across all conditions.
3.4.3. Subjective Workload Assessment
The analysis of the NASA-TLX data revealed insightful trends about the participants’ perceived workload during the task (
Table 3). Scores below 10 on NASA-TLX are considered low, and scores between 10 and 29 indicate a medium workload. The average scores for each dimension suggest a generally moderate level of perceived workload. Mental demand was slightly above average at 12.31; physical demand was low, with an average of 8.94; and temporal demand was also low, at 7.44, indicating minimal time pressure felt by participants. Performance confidence was modest (average of 10.1), the task was perceived as mildly difficult (average of 11.8), and the participants generally did not feel frustrated (average of 7.8).
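Under the unweighted (raw) variant of NASA-TLX, the overall workload index is simply the mean of the six subscale ratings. Applying it to the averages above is a sketch, since the text does not state whether the weighted or raw variant was used:

```python
def raw_tlx(mental: float, physical: float, temporal: float,
            performance: float, effort: float, frustration: float) -> float:
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings."""
    return (mental + physical + temporal + performance + effort + frustration) / 6

# Mean subscale scores reported in the study:
overall = raw_tlx(12.31, 8.94, 7.44, 10.1, 11.8, 7.8)
print(round(overall, 2))  # prints 9.73
```

An overall index just below 10 sits at the boundary between the “low” and “medium” workload bands cited above, consistent with the moderate workload reported.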
A Principal Component Analysis (PCA) of the NASA-TLX items, using oblique rotation and retaining components with eigenvalues above 1, further clarified the underlying structure of the workload dimensions. The PCA was supported by a significant Bartlett’s test of sphericity (
p = 0.009) and a moderate Kaiser–Meyer–Olkin (KMO) measure of 0.63. As shown in
Table 4, two primary components were extracted: The first (RC1) strongly correlated with physical demand (0.906), mental demand (0.793) and effort (0.784), suggesting an “Overall Task Load” factor that combines physical and mental demands with general effort. Temporal demand was also moderately loaded on this component. The second component (RC2) contrasted performance (negatively loaded) with frustration (positively loaded), reflecting an “Emotional Strain vs. Task Efficacy” factor, where increased frustration aligns with lower perceived performance.
Temporal demand’s presence in both components, along with its unique variance (0.287), points to a more complex role, suggesting that time both influences and is influenced by the Overall Task Load and the emotional/performance dynamics. For example, high temporal demand might increase the overall workload (physical, mental and effort) while simultaneously affecting frustration levels and perceived performance. The unique variance of temporal demand may reflect urgency, pacing or time-related stress specific to the task.
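Bartlett’s test of sphericity, which the PCA above relies on, checks whether the item correlation matrix differs from the identity matrix. A stdlib-only sketch for three items (the paper’s test covered all six NASA-TLX items and reported p = 0.009; the correlation matrix below is hypothetical):

```python
import math

def det3(R):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = R
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def chi2_sf_df3(x):
    """Survival function of the chi-squared distribution with 3 degrees
    of freedom (closed form for odd df; df = p*(p-1)/2 = 3 when p = 3)."""
    return 1.0 - (math.erf(math.sqrt(x / 2.0))
                  - math.sqrt(2.0 / math.pi) * math.sqrt(x) * math.exp(-x / 2.0))

def bartlett_sphericity(R, n):
    """Bartlett's test statistic and p-value for a 3x3 correlation
    matrix R estimated from n observations."""
    p = 3
    stat = -(n - 1 - (2 * p + 5) / 6.0) * math.log(det3(R))
    return stat, chi2_sf_df3(stat)

# Hypothetical correlations among three workload items.
R = [[1.0, 0.6, 0.4],
     [0.6, 1.0, 0.5],
     [0.4, 0.5, 1.0]]
stat, p_value = bartlett_sphericity(R, n=16)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```

In practice the full six-item analysis would use a statistics package rather than the fixed-dimension helpers sketched here.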
At the end of the session, the participants were asked to identify devices in which they could see this technology being implemented. Many participants cited game pads and phones, and a significant number pointed out its suitability for an in-car panel. This preference was expected, as the panel being tested was clearly intended for in-car use. Participants explained that the design of the panel was intuitively appropriate for minimizing visual distractions. When discussing the potential advantages of integrating variable FF values into existing devices, they noted not only improved ease of use but also potential benefits for blind users, as well as more agreeable tactile feedback compared with harsh audio signals. When asked for further comments, a few participants provided insightful feedback, particularly regarding the ergonomics of the device’s slick surface, which tends to stick to the finger, in contrast to a matte-finish surface. The impact of different materials on touch perception has been documented in previous work [
7].
4. Discussion and Conclusions
In this paper, we investigated repulsive force feedback (FF) at the JND level used for simulating the sense of pushbutton bank switches, and the performance of participants interacting with a touch-sensitive slider. The participants were asked to locate and select different zones on the slider based on the FF levels they felt. They could accurately identify FF levels between 0.04 N and 0.054 N at the JND levels of 0.254 and 0.298 for the linear scale, and between 0.028 N and 0.033 N at the JND levels of 0.074 and 0.164 for the logarithmic scale. They overestimated values below this range and underestimated values above it. The NASA-TLX scores indicate that the participants experienced a moderate level of workload during the task. While mental demand was slightly above average, physical and temporal demand were relatively low, suggesting that the task was more mentally taxing than physically demanding or time-pressured. The modest performance scores and mild effort level indicate that participants found the task somewhat challenging but not overly so. The low level of frustration further suggests that the task did not induce significant emotional strain.
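Assuming the reported JND levels are Weber fractions (ΔF/F), the force resolution they imply follows from simple arithmetic: a fraction k at reference force F corresponds to a just-noticeable increment of k·F. A sketch using the linear-scale figures quoted above:

```python
def jnd_increment(reference_force_n: float, weber_fraction: float) -> float:
    """Smallest detectable force change (N) at the given reference force,
    under the Weber-fraction interpretation of the reported JND levels."""
    return weber_fraction * reference_force_n

# Linear-scale figures from the text: JND level 0.254 at 0.04 N.
delta = jnd_increment(0.04, 0.254)
print(f"just-noticeable increment ≈ {delta * 1000:.2f} mN")  # ≈ 10.16 mN
```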
Integrating these results, the system demonstrates a high level of practical efficacy. The accuracy in identifying FF levels, despite some perceptual challenges at specific levels, indicates robust sensory response capabilities. The consistent response times across varying FF intensities ensure reliable and predictable interaction, which is crucial to user satisfaction and system reliability. Moreover, the moderate workload reported by the participants suggests that the system is user-friendly and does not impose excessive cognitive or physical stress.
In the automotive industry, FF technology is being used to create more intuitive and safer control systems for drivers. For example, haptic feedback can improve the effectiveness of touch screens and controls, reducing driver distraction and increasing engagement [
66]. The current trajectory in automotive design indicates a continued and widespread adoption of capacitive touch controls in consumer vehicles, a trend that is expected to persist well into the future [
67]. Despite advancements, the reality of consumer-accessible Level IV autonomous vehicles, which would eliminate the need for an attentive driver, remains distant [
68]. This underscores the critical need for ongoing research into making existing touch controls safer and more user-friendly for drivers. Our study contributes to this area by demonstrating the potential of force feedback across the Tactotek touch-sensitive slider’s seven distinct locations to enhance the user interface.
External factors, such as ambient noise, lighting and environmental stressors, can significantly impact the effectiveness of force feedback systems. Our experiment was conducted in a controlled laboratory environment, allowing the participants to concentrate solely on the task and the interface. However, real-world driving involves numerous distractions and environmental factors—traffic, weather conditions, radio noise, passengers, etc. Ambient noise, for instance, can distract users or mask the auditory feedback that often accompanies haptic interfaces, potentially reducing the user’s ability to respond to tactile cues. Inadequate lighting may strain the user’s vision, leading to fatigue and decreased interaction accuracy with the force feedback system. Moreover, environmental stressors, including temperature fluctuations and vibrations, could interfere with the user’s sensory perception, thus affecting performance. To ensure optimal deployment of force feedback systems under real-world conditions, it is essential to consider these external factors. Guidelines could include recommendations for noise-dampening features, adjustable lighting and robust system designs that can withstand environmental variances.
To build upon our findings and address these real-world complexities, we propose several avenues for future research. One key area involves exploring various configurations of the touch-sensitive slider, particularly experimenting with different numbers of zones to identify an optimal setup that maximizes user experience and performance.
It is essential to consider the interplay between user comfort and control accuracy. Tactile experience is greatly influenced by the materials used; soft-touch materials often enhance comfort for prolonged use, while harder materials might be favored for their precise feedback capabilities. Investigating the impact of different overlay materials on the slider could offer insights into how these materials influence the perception and discrimination of FF levels. For instance, Teflon has shown potential in enhancing vibrotactile signals [
7], suggesting that material choice could be a critical factor in design considerations.
The study’s exploration of FF technology could be significantly enriched by delving into additional applications, such as virtual reality (VR) and remote surgery. In VR, FF can transcend visual and auditory immersion by introducing a tangible dimension, allowing users to “feel” the virtual environment, thus enhancing the realism and depth of the experience. Similarly, in the context of remote surgery, FF becomes a pivotal component, providing surgeons with the tactile feedback that is otherwise lost in teleoperated procedures. This haptic information can be crucial to delicate surgical maneuvers, potentially increasing precision and reducing the risk of errors. Expanding the scope to include these applications could uncover new insights into the capabilities and limitations of FF technology.
These directions aim not only to refine touch interface technology for current vehicles but also to prepare for the evolving landscape of automotive user experience as we progress towards higher levels of vehicle autonomy.