Article

Haptic Texture Rendering of 2D Image Based on Adaptive Fractional Differential Method

1 Shenzhen Research Institute of Southeast University, Shenzhen 518038, China
2 School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, China
3 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12346; https://doi.org/10.3390/app122312346
Submission received: 8 November 2022 / Revised: 29 November 2022 / Accepted: 30 November 2022 / Published: 2 December 2022

Abstract

The fractional differential algorithm is effective at extracting image textures, but an appropriate fractional differential order usually has to be selected for textures of different scales. We therefore propose a novel approach for haptic texture rendering of two-dimensional (2D) images using an adaptive fractional differential method. Starting from the fractional differential operator defined by the Grünwald–Letnikov (G–L) derivative and combining it with the characteristics of human vision, we propose an adaptive fractional differential method based on the composite sub-band gradient vector of the sub-images obtained by wavelet decomposition of the image texture. We apply the extraction results to a haptic display system to reconstruct the three-dimensional (3D) texture force field and render the texture surface of 2D images. Based on this approach, we carry out a quantitative analysis of the haptic texture rendering of 2D images using multi-scale structural similarity (MS-SSIM) and image information entropy. Experimental results show that this method extracts texture features well and achieves the best texture force field for 2D images.

1. Introduction

Haptic texture is a crucial cue that informs users of the interaction state with a surface texture, i.e., the fine geometric surface features of an object. Haptic textures have received substantial attention in various fields [1,2,3]. Previous researchers have shown that adding haptic texture cues can greatly improve the realism of a virtual environment [4]. Therefore, the haptic texture rendering technique has several potential applications, such as virtual surgical simulation, haptic feedback teleoperation, on-line e-commerce, and aiding the visually impaired [5,6,7].
In general, there are three basic methods to render haptic textures. The first is the sample-based approach. S. Andrews introduced a system based on a tactile probe and a visual tracker for scanning and synthesizing tactile textures [8]. H. Vasudevan estimated the surface texture from the frequency spectrum of vertical perturbations obtained by dragging the tip of a haptic device over the object surface [9]. A. Song developed a PVDF-based haptic texture sensor that imitates human active texture perception to measure real object surface textures for haptic texture rendering in virtual reality [10]. V. Bove used holograms to record the surfaces and textures of objects in a holo-haptic system; the produced haptic images were felt and shaped by a handheld device [11]. The second is the procedural texture approach, which uses mathematical functions to synthesize the surface texture of objects; several related studies have been summarized in detail [12,13,14]. The third is the image-based approach, which constructs a texture force field from 2D image data. This kind of haptic texturing approach can therefore also be considered a type of virtualized reality [15]. L. M. Benjamin computed an elevation map based on the luminance coefficient of texel images using four different techniques, all based on the assumption that the height value of a pixel is proportional to its luminance value. The elevation map is then used to generate 3D bumps on the surface of the detected object and to calculate the corresponding tactile force for haptic rendering [16]. J. Wu and A. Song processed 2D images with a Gaussian filter to obtain the low-frequency components and then subtracted the filtered image from the original image; the remaining components represent the texture information. The forces simulated from the textures were applied to the user through a Delta haptic device [17]. S. Xu proposed an image-based haptic texture generation approach that replaces the Gaussian filter with an improved switching vector median filter for modeling the textured force and simulating the haptic stimuli [18]. Vasudevan used conventional edge detection algorithms and proposed a haptic mask design that allows the user to feel the contours and textures of an image with haptic devices [19]. J. Li and A. Song presented a haptic texture rendering method based on color temperature and luminance to construct 3D texture force fields of 2D color images [20]. E. R. Vimina and Divya proposed a fixed-size descriptor based on local strength for texture calculation and further expanded the texture information by multi-channel color data collection [21].
Owing to its potential for cost-effective realization, the image-based haptic texture rendering approach has attracted substantial attention from researchers. However, existing haptic texture rendering methods have problems in dealing with images that contain fine geometric texture features.
In this paper, we propose a novel fractional differential method for image-based haptic texture rendering. Fractional differentiation is a relatively new tool for image signal processing. Complex texture details exhibit highly self-similar fractal information in image signals, and the mathematical basis of fractal theory includes fractional differentiation [22,23]. Therefore, the features of complex textures in an image can be extracted by fractional differentiation and applied to haptic texture reconstruction.
This paper first introduces the Grünwald–Letnikov (G–L) definition of the fractional differential in Euclidean space [24,25]. Based on the G–L definition, an isotropic m × n fractional differential mask is derived. Second, we propose a novel method to adaptively select the order of the fractional differential operator by using the composite sub-band gradient vector (CSGV) obtained from wavelet decomposition [26] together with human visual characteristics [27,28]. Third, we apply the approach to haptic texture rendering and give a quantitative analysis using image information entropy and multi-scale structural similarity (MS-SSIM); the extraction results are fed into the haptic display system to reconstruct the three-dimensional texture force field that renders the texture surface of 2D images. Finally, experiments were carried out on different types of texture images. The results show that the proposed haptic texture rendering method based on adaptive fractional differentiation can extract texture features well and obtain an excellent texture force field for 2D images.

2. The Advantage of Fractional Differential

Fractional differential processing of a signal can not only nonlinearly enhance its high-frequency components, but also enhance the intermediate-frequency components to a certain extent, while largely preserving the low-frequency components [29,30]. Using this property of fractional differentiation, we can preserve the low-frequency contour information of digital images while nonlinearly enhancing high-frequency detailed texture patterns with wide gray-level distributions. Finally, the enhanced image is subtracted from the original image to obtain the texture extraction result.
The Grünwald–Letnikov (G–L), Caputo, and Riemann–Liouville (R–L) definitions are the three commonly used definitions of the fractional differential in Euclidean space [29]. Recent research indicates that implementations of fractional differentials in digital image processing are almost always based on the G–L definition. The Tiansi operator mask constructed according to the G–L definition is, however, inaccurate in image processing, because a digital image is discrete and the mask is only an approximate expression of the underlying function, so the resulting image texture extraction is often unsatisfactory.
Therefore, we add an adaptive mechanism to the G–L fractional differentiation definition and find that it is well suited to image texture acquisition.
The low-frequency components of the image are well preserved under fractional differential mask filtering. For pixels whose gray values fluctuate rapidly within their neighborhood (including image edges and texture regions), the output gray value is dramatically enhanced, showing that the fractional differential operator mask significantly strengthens the high-frequency components of the original image.
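To make the idea concrete, the following Python sketch builds a small isotropic mask from the leading G–L coefficients and applies it to a grayscale image; the 5 × 5 eight-direction layout, the default order, and the function names are illustrative assumptions rather than the exact mask used in this work.

```python
import numpy as np
from scipy.ndimage import convolve

def gl_coefficients(v, n=3):
    """First n Gruenwald-Letnikov coefficients (-1)^k * C(v, k):
    c0 = 1, c1 = -v, c2 = v*(v-1)/2, ..."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = -c[k - 1] * (v - k + 1) / k
    return c

def gl_mask(v):
    """5x5 isotropic mask placing the first three G-L coefficients along the
    eight principal directions around the center pixel (illustrative layout)."""
    c = gl_coefficients(v, 3)
    m = np.zeros((5, 5))
    m[2, 2] = 8 * c[0]                      # center pixel shared by all 8 directions
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        m[2 + dy, 2 + dx] = c[1]            # distance-1 ring
        m[2 + 2 * dy, 2 + 2 * dx] = c[2]    # distance-2 ring
    return m

def extract_texture(image, v=0.55):
    """Fractional-differential enhancement followed by the subtraction step
    described above (texture = original - enhanced, per the text)."""
    img = image.astype(float)
    enhanced = convolve(img, gl_mask(v), mode="reflect")
    return img - enhanced
```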

3. Differential Order for Adaptive Selection Algorithm

Texture is one of the most essential properties in image processing and analysis. Texture gives intuitive assessments of qualities such as regularity, coarseness, and smoothness. The majority of texture analysis methods analyze the image at a single scale. As revealed by J. Beck et al. [31], the visual cortex can be modeled as multiple channels, where each channel is tuned to a specific direction and frequency. Multi-scale texture analysis techniques are motivated by such multichannel processing, and several multichannel texture analysis systems have been suggested [32,33]. The rapid development of wavelet theory has also brought new theories and methods to the field of image processing. I. Daubechies suggested a discretization approach for the wavelet transform [34], and the relationship between multiresolution theory and wavelet transforms was further developed by S. G. Mallat [35]. Since then, wavelet theory has developed into a multi-scale (multi-resolution) mathematical tool for image analysis. The use of multi-scale methods in texture image analysis is based on the premise that lower-resolution channels record "large" textures better, while higher-resolution channels record "small" textures better.
The process is as follows. Apply the wavelet (high-pass) and scaling (low-pass) filters to the image both horizontally and vertically, then sub-sample each output image by a factor of two. This produces a coarse, or approximate, image Cj and three direction-selective detail images Dj,k, where k = 1, 2, 3 and j denotes the decomposition level. The same procedure is applied to construct the next level of the resolution hierarchy. The hierarchical wavelet decomposition of the image is therefore expressed as:
$$\begin{cases}
C_j = \left[ H_x * \left[ H_y * C_{j-1} \right]_{\downarrow 2,1} \right]_{\downarrow 1,2} \\
D_{j,1} = \left[ H_x * \left[ G_y * C_{j-1} \right]_{\downarrow 2,1} \right]_{\downarrow 1,2} \\
D_{j,2} = \left[ G_x * \left[ H_y * C_{j-1} \right]_{\downarrow 2,1} \right]_{\downarrow 1,2} \\
D_{j,3} = \left[ G_x * \left[ G_y * C_{j-1} \right]_{\downarrow 2,1} \right]_{\downarrow 1,2}
\end{cases} \tag{1}$$
where $C_0 = I$ is the original image, $\downarrow 1,2$ denotes down-sampling every other pixel in the y direction, $\downarrow 2,1$ denotes down-sampling every other pixel in the x direction, and $*$ is the convolution operator. $G_x$ and $H_x$, and $G_y$ and $H_y$, represent the high-pass and low-pass filters in the x and y directions, respectively. The original image can thus be represented by a series of sub-images at multiple scales; $\{C_j, D_{j,k}\}$ (j = 1, …, J; k = 1, 2, 3) is the multi-scale representation of image I at depth J. DAUB4 is used as the wavelet basis here because of its good average performance.
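For reference, PyWavelets implements exactly this filter-and-downsample recursion; a minimal sketch follows. The wavelet name is an assumption: 'db2' is the 4-tap Daubechies filter that some texts call DAUB4, while PyWavelets' 'db4' denotes the 8-tap filter, so either may correspond to the DAUB4 basis mentioned above.

```python
import pywt  # PyWavelets

def wavelet_decompose(image, wavelet="db2", level=2):
    """Hierarchical 2D DWT of Equation (1): returns the coarse approximation C_J
    and, for each level j, the three detail images D_{j,k} (pywt orders them as
    horizontal, vertical, and diagonal detail)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
    c_J = coeffs[0]          # approximation at the deepest level
    details = coeffs[1:]     # list of (D_{j,1}, D_{j,2}, D_{j,3}), coarse to fine
    return c_J, details
```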
Gradient direction images can provide useful properties for texture analysis [36]. After the image is processed by a gradient operator, the magnitude and direction of the change in pixel gray values are obtained, and the image gradient describes how the image varies in different directions. A combination of low-pass (H) and high-pass (G) filters enables wavelet decomposition, with each set of filters sampling at half the frequency of the previous set. The original image can therefore be processed to obtain four sub-images, namely:
  • LL sub-image: low frequencies in both x and y directions.
  • LH sub-image: low frequencies in the x direction and high frequencies in the y direction.
  • HL sub-image: high frequencies in the x direction and low frequencies in the y direction.
  • HH sub-image: high frequencies in both x and y directions.
LL, LH, HL, and HH are the four sub-images obtained by wavelet decomposition. A gradient vector is constructed for each sub-image, denoted $SGV_1$, $SGV_2$, $SGV_3$, and $SGV_4$, respectively. We define $CSGV = SGV_1 \,\|\, SGV_2 \,\|\, SGV_3 \,\|\, SGV_4$, where $\|$ is a "superimpose" operation. CSGV therefore describes the image texture better than the original gradient vector.
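Since the text does not spell out the gradient and "superimpose" operations, the sketch below makes the simplest assumptions: each sub-band's gradient is taken as its gradient magnitude, every sub-band map is brought back to the original resolution by pixel repetition, and superposition is a per-pixel sum.

```python
import numpy as np
import pywt

def csgv_map(image, wavelet="db2"):
    """Per-pixel composite sub-band gradient vector (CSGV) -- a sketch under the
    assumptions stated above, not the paper's exact construction."""
    ca, (ch, cv, cd) = pywt.dwt2(image.astype(float), wavelet)  # LL, LH, HL, HH bands
    acc = np.zeros(image.shape, dtype=float)
    for band in (ca, ch, cv, cd):
        gy, gx = np.gradient(band)
        sgv = np.hypot(gx, gy)                    # sub-band gradient magnitude
        up = np.kron(sgv, np.ones((2, 2)))        # upsample back toward original size
        acc += up[: image.shape[0], : image.shape[1]]
    return acc
```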
Studies of human visual characteristics reveal that the sensitivity of the human eye to gray values in the range 0–255 is not constant. When the gray value is particularly high or low, it is difficult for the human eye to perceive changes in intensity. Near gray level 0, the human eye can only perceive a change of about 8 gray levels; near gray level 255, it can only perceive a change of about 3 gray levels; and around gray level 128, it can perceive changes as small as 2 gray levels [37]. In digital image processing, the gradient magnitude at a point is computed from the rate of change of the gray value at that point, and the composite gradient vector CSGV defined above represents the rate of change of the image gray value at multiple scales. Therefore, we regard pixels with CSGV less than 2 as regions of constant grayscale, for which the differential order is 0; pixels with CSGV in the range 2–128 are regarded as regions with small grayscale changes, where appropriately increasing the differential order enhances the eye's perception of fine textures; and pixels with CSGV greater than 128 usually belong to edge contour areas, whose gradients must be properly limited, so the differential order should be appropriately reduced there. From this analysis, we establish the following relation between the fractional order γ and CSGV:
$$\gamma = \begin{cases}
\dfrac{1}{\max(SGV) + \varepsilon_1} \cdot CSGV + \varepsilon_2, & CSGV \geq 2 \\[4pt]
0, & CSGV < 2
\end{cases} \tag{2}$$
Here, $\varepsilon_1$ is any positive number; $\max(SGV)$ is the maximum SGV value over all pixels of the image; and $\varepsilon_2$ is an artificially set constant whose purpose is to enhance the effect of the center pixel on its neighboring pixels. To ensure that the differential order γ does not exceed 1, $\varepsilon_2$ must satisfy:
$$\varepsilon_2 < 1 - \frac{CSGV}{\max(SGV) + \varepsilon_1} \tag{3}$$
When $CSGV > 128$, we take $\varepsilon_1 = \max(SGV)$; from Equation (3), $\varepsilon_2 < 1/2$, so we set $\varepsilon_2 = 0.499$. When $2 \leq CSGV \leq 128$, we take $\varepsilon_1 = 2\max(SGV)$, which gives $\varepsilon_2 < 2/3$, so we set $\varepsilon_2 = 0.666$. The relationship between the differential order γ and CSGV is therefore expressed as:
$$\gamma = \begin{cases}
\dfrac{CSGV}{2\max(SGV)} + 0.499, & CSGV > 128 \\[4pt]
\dfrac{CSGV}{3\max(SGV)} + 0.666, & 2 \leq CSGV \leq 128 \\[4pt]
0, & CSGV < 2
\end{cases} \tag{4}$$
According to these equations, the gray value varies drastically along the image's edge contours, where CSGV is larger, so γ is reduced accordingly. For densely textured areas, the grayscale variation and CSGV are small, so the resulting fractional order γ is appropriately increased. For areas where the gray value does not change or changes very little, γ is 0 and no processing is performed, preserving the gray value.
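A direct NumPy translation of Equation (4) is shown below; treating max(SGV) as the maximum CSGV value over the image is our simplification, since the text defines it as the maximum sub-band gradient value over all pixels.

```python
import numpy as np

def adaptive_order(csgv):
    """Per-pixel fractional differential order gamma from the CSGV map, Equation (4)."""
    m = csgv.max()                                # stand-in for max(SGV), see note above
    gamma = np.zeros_like(csgv, dtype=float)

    edges = csgv > 128                            # edge/contour regions
    gamma[edges] = csgv[edges] / (2.0 * m) + 0.499

    fine = (csgv >= 2) & (csgv <= 128)            # fine-texture regions
    gamma[fine] = csgv[fine] / (3.0 * m) + 0.666

    return gamma                                  # CSGV < 2 keeps gamma = 0
```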

4. Texture Extraction Performance Evaluation

This section demonstrates that the proposed adaptive algorithm based on CSGV performs better in texture extraction.
For the G–L fractional differential, we set 0.3, 0.55, and 0.7 as fixed fractional differential orders. To analyze and compare the ability of the adaptive-order differential and the fixed-order differentials to capture texture information, five groups of comparative experiments were conducted; their results are shown in Figure 1. The results of the five comparison groups show the advantage of fractional differentiation in extracting complex texture information. Although the 0.3-order differential preserves the detailed texture well, the extraction is too weak to be displayed clearly in the image (as shown in Figure 1(a1,b1,c1,d1,e1)). As the differential order increases (between 0 and 1), image enhancement sharpens the "large" textures, but "small" textures are lost. The adaptive differential order selection algorithm adopted in this paper retains almost all texture details and achieves the best texture extraction among the five groups (as shown in Figure 1(a4,b4,c4,d4,e4)).
Multi-scale structural similarity (MS-SSIM) and information entropy are used as evaluation criteria to assess the quality of the texture information extracted from the images [38,39]. Information entropy can indirectly reflect the amount of information contained in a grayscale image and is very sensitive to images containing textures. Therefore, we use information entropy to analyze the amount of information in the image texture after extracting the texture features. The information entropy is calculated as follows:
$$E(p) = -\sum_{i=1}^{N} p_i \ln(p_i) \tag{5}$$
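In code, the entropy of an 8-bit grayscale image can be computed from its normalized histogram; this short sketch follows Equation (5) directly.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Information entropy E(p) = -sum_i p_i * ln(p_i) of a grayscale image,
    where p_i is the normalized gray-level histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                   # 0 * ln(0) is taken as 0
    return float(-(p * np.log(p)).sum())
```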
Because the human visual system is well adapted to extracting structural information from scenes, a structural similarity measurement can indicate whether image quality is good for perception. The MS-SSIM value is commonly used to compare the quality of two images and serves as an objective evaluation method that approximates human subjective perception [39]. We use it to compare the differences between the extraction results of the adaptive method and those of the fixed-order differential methods, thereby further verifying the clear advantage of the adaptive method in texture extraction.
Comparison measures for luminance, contrast, and structure are given in Ref. [39]:
$$\begin{cases}
l(x, y) = \dfrac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \\[6pt]
c(x, y) = \dfrac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \\[6pt]
s(x, y) = \dfrac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}
\end{cases} \tag{6}$$
where $x = \{x_i \mid i = 1, 2, \ldots, N\}$ and $y = \{y_i \mid i = 1, 2, \ldots, N\}$ are two image patches extracted from the same spatial location of the two images, and $\mu_x$, $\sigma_x^2$, and $\sigma_{xy}$ are the mean of $x$, the variance of $x$, and the covariance of $x$ and $y$, respectively. $C_1$, $C_2$, and $C_3$ are small constants given by
$$C_1 = (K_1 L)^2, \quad C_2 = (K_2 L)^2, \quad C_3 = C_2 / 2 \tag{7}$$
where L is the dynamic range of the pixel values (L = 255 for 8-bit grayscale images), and $K_1 \ll 1$ and $K_2 \ll 1$ are two scalar constants.
The procedure of the MS-SSIM method for image structural similarity assessment is illustrated in Figure 2. The two images to be compared are used as input signals; a low-pass filter is applied iteratively, and the filtered image is down-sampled by a factor of 2. The original image corresponds to scale 1 and the highest scale is M. At the j-th scale, the contrast comparison $c_j(x, y)$ and the structure comparison $s_j(x, y)$ are calculated. The luminance comparison $l_M(x, y)$ is computed only at scale M. The MS-SSIM evaluation is then obtained by combining the measurements at the different scales:
$$\mathrm{MS\text{-}SSIM}(x, y) = \left[ l_M(x, y) \right]^{\alpha} \prod_{j=1}^{M} \left[ c_j(x, y) \right]^{\beta} \left[ s_j(x, y) \right]^{\gamma} \tag{8}$$
where α, β, and γ are parameters defining the relative importance of the three components. To simplify parameter selection, we set α = β = γ = 1.
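The sketch below follows the procedure of Figure 2 and Equations (6)-(8) with alpha = beta = gamma = 1. Two simplifications are our own: statistics are computed globally over each image rather than over sliding windows, and a 2 x 2 mean filter stands in for the low-pass filter before each downsampling step.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _global_stats(x, y):
    """Global means, variances, and covariance of two equally sized images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return mx, my, vx, vy, cxy

def ms_ssim(x, y, scales=5, K1=0.01, K2=0.03, L=255):
    """Simplified MS-SSIM between two grayscale images (Equations (6)-(8))."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    x, y = x.astype(float), y.astype(float)
    result = 1.0
    for j in range(1, scales + 1):
        mx, my, vx, vy, cxy = _global_stats(x, y)
        sx, sy = np.sqrt(vx), np.sqrt(vy)
        c = (2 * sx * sy + C2) / (vx + vy + C2)     # contrast comparison c_j
        s = (cxy + C3) / (sx * sy + C3)             # structure comparison s_j
        result *= c * s
        if j == scales:                             # luminance only at the coarsest scale M
            result *= (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
        else:                                       # low-pass filter, then downsample by 2
            x = uniform_filter(x, size=2)[::2, ::2]
            y = uniform_filter(y, size=2)[::2, ::2]
    return result
```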

5. Texture Extraction Results Analysis

The analysis and comparison results are shown in Figure 3. The information entropy obtained by the adaptive method is marked in blue, and the red curve shows the information entropy obtained for fixed orders from 0.05 to 0.95 (in steps of 0.05). The result of the adaptive method is close to that of orders 0.5–0.7, which is the fractional-order interval with the best extraction effect. The grey curve shows the structural similarity between each fixed-order result and the adaptive result; the closer the MS-SSIM value is to 1, the closer that fixed-order extraction is to the adaptive result. From Figure 3, the extraction results of orders 0.5–0.7 are the most similar to those of the adaptive method, which agrees with the comparison in Figure 1. This quantitative analysis shows that the adaptive method improves texture extraction while losing fewer texture details.
We also used statistics constructed from the gray-level co-occurrence matrix (GLCM) to calculate the physical information of the texture images, selecting four commonly used statistics with the offset set to 1 and the directions set to [0°, 45°, 90°, 135°]; the results are shown in Table 1, where each bracketed entry lists the values for the four directions in turn, for the original images and the adaptively extracted textures of Figure 1.
AMS (the angular second moment) measures the uniformity of the image grayscale distribution and the coarseness of the texture. Entropy measures the randomness contained in an image and expresses the complexity of the image texture. Contrast reflects the clarity of the image and the depth of the texture: the more pronounced the "large" texture, the greater the contrast. Correlation reflects the consistency of the image texture.
After the texture image is processed by the adaptive fractional differential method, the "small" texture is significantly enhanced, so the values of AMS and entropy increase. The gray value of the "small" texture becomes closer to that of the "large" texture than before, so the contrast value decreases.
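The four statistics of Table 1 can be reproduced with scikit-image's GLCM utilities, as sketched below; note that scikit-image names the angular second moment 'ASM' (AMS above) and provides no entropy property, so entropy is computed directly from the normalized co-occurrence matrix.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_statistics(gray_u8):
    """GLCM statistics with offset 1 and directions 0, 45, 90, 135 degrees."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    P = graycomatrix(gray_u8, distances=[1], angles=angles,
                     levels=256, symmetric=True, normed=True)
    asm = graycoprops(P, "ASM")[0]                  # one value per direction
    contrast = graycoprops(P, "contrast")[0]
    correlation = graycoprops(P, "correlation")[0]
    # entropy per direction: -sum p * ln(p) over the co-occurrence probabilities
    entropy = np.array([-(p[p > 0] * np.log(p[p > 0])).sum()
                        for p in np.moveaxis(P[:, :, 0, :], -1, 0)])
    return asm, entropy, contrast, correlation
```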

6. Haptic Texture Rendering Model

Haptic texture rendering is a method of reconstructing the surface attributes of virtual objects as force fields or force vectors, so that users can feel the surface texture of virtual objects through haptic devices (such as the Phantom or the Force Dimension Delta hand controller). In this section, we build a new haptic texture model based on the texture extraction results of the adaptive fractional differential method.
The texture force vector $F(i)$ at each pixel of an image can be modeled as the combination of a normal force vector $F_N(i)$ and a tangential force vector $F_T(i)$:
$$F(i) = F_N(i) + F_T(i) \tag{9}$$
The tangential force vector of the image is calculated based on the following assumption: after the gray image is processed by the proposed adaptive fractional differential method, an interaction force exists between any two pixels. The interaction force between two pixels $p_i$ and $p_j$ is proportional to the absolute value of the difference of their gray values and inversely proportional to the distance between them. The direction of the interaction force vector is defined as pointing from the pixel with the higher luminance value to the pixel with the lower luminance value:
$$F_{ij} = \frac{\left| G(p_i) - G(p_j) \right|}{\left\| p_i - p_j \right\|} \, r_{i,j} \tag{10}$$
where $\| p_i - p_j \|$ is the distance between pixels $p_i$ and $p_j$, $G(p_i)$ and $G(p_j)$ are the gray values of pixels $p_i$ and $p_j$, respectively, and $r_{i,j}$ denotes the direction from the higher-luminance pixel to the lower-luminance pixel.
If the distance between two pixels $p_i$ and $p_j$ is small, a given difference in their gray values produces a large interaction force, and vice versa. The force vector of pixel $p_i$ is then defined as the vector sum of all interaction force vectors from the pixels within an n × n neighborhood N to the center pixel $p_i$:
$$F_i = \sum_{p_j \in N,\; p_j \neq p_i} F_{ij} \tag{11}$$
The force vector of each pixel is a two-dimensional vector associated with the direction of gray-level change in the image, as illustrated in Figure 4. Figure 4b is a magnified view of the small red square area in Figure 4a and shows the tangential force vectors of some pixels computed by Equation (11) with a 3 × 3 neighborhood N. Here, the arrow direction represents the direction of the force vector and the arrow length represents its amplitude.
When the gray values of neighboring pixels change more strongly, the pixel force vector is larger, and vice versa. Since this pixel force vector is consistent with the texture information, it can be regarded as the tangential component of the texture force:
$$F_T(i) = F_i = \sum_{p_j \in N,\; p_j \neq p_i} F_{ij} \tag{12}$$
According to the psychology of human color and space perception, when a person observes the environment, a brighter object is always perceived as closer than a darker one [40]. Therefore, we define the normal force vector $F_N(i)$ to be proportional to the image pixel gray value:
$$F_N(i) = c \times G(p_i) + f_{wall} \tag{13}$$
where c is a proportionality factor and $f_{wall}$ denotes the constraint force of the object surface. The equation implies that if a portion of the image is brighter, the rendered normal force is larger, giving the user the feeling of a bump when the virtual object is touched; if the portion is darker, the rendered normal force is smaller, conveying a feeling of shallowness.
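A NumPy sketch of the whole force model follows: the tangential field accumulates the pairwise forces of Equations (10)-(12) over a 3 x 3 neighborhood, and the normal field follows Equation (13). The values of c, f_wall, and the neighborhood size are illustrative, not the parameters used in the experiments.

```python
import numpy as np

def texture_force_field(texture, c=0.01, f_wall=0.0, n=3):
    """Tangential (x, y) and normal force per pixel from an extracted texture image."""
    G = texture.astype(float)
    r = n // 2
    Ft = np.zeros(G.shape + (2,))                 # tangential force components
    dys, dxs = np.mgrid[-r:r + 1, -r:r + 1]
    for dy, dx in zip(dys.ravel(), dxs.ravel()):
        if dy == 0 and dx == 0:
            continue
        d = np.hypot(dx, dy)
        neighbor = np.roll(np.roll(G, -dy, axis=0), -dx, axis=1)   # G(p_j)
        mag = np.abs(G - neighbor) / d            # |G(p_i) - G(p_j)| / ||p_i - p_j||
        sign = np.sign(G - neighbor)              # +1: force points towards the darker neighbor
        Ft[..., 0] += mag * sign * (dx / d)
        Ft[..., 1] += mag * sign * (dy / d)
    Fn = c * G + f_wall                           # normal force, Equation (13)
    return Ft, Fn
```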

7. Experiment

The experimental system for haptic texture rendering of 2D images consists of a Phantom Omni haptic device and a computer, as shown in Figure 5. The Phantom Omni provides six-degree-of-freedom position/orientation sensing and three-degree-of-freedom force feedback with a maximum force of 3.3 N. Its workspace is 160 mm (width) × 120 mm (height) × 70 mm (depth), with a positional resolution of 0.055 mm.
In this experimental system, we selected five 500 × 500-pixel images from the Brodatz texture image database [41] and used the proposed adaptive fractional differential method to extract their texture features, as shown in Table 2. We then used the proposed haptic texture rendering model to render the object surfaces based on the extracted textures. To verify the effect of our method, 20 volunteer subjects (10 male and 10 female, aged 21 to 31) were recruited to perform texture perception experiments. Each subject haptically explored the surfaces of 25 2D images, presented one by one in random order by the computer, and classified them into the 5 image types using the Phantom Omni haptic device without any visual information about the images.
The appearance of the original image and the texture image were hidden, and only the calculated texture force was mapped onto a smooth virtual plane of 500 × 500 pixels. The constraint force of the virtual plane was calculated using Hooke's law. As the volunteers "groped" the blank virtual plane, the hand controller fed back the texture force of the image together with the constraint force of the virtual plane. From the perceived texture, the volunteers had to select, out of the five original images, the image reproduced by the force/haptic texture, and the rate of correct perception for each image was then counted.
The experimental results show that the average classification accuracies of the 5 types of images based on haptic perception were 87%, 72%, 81%, 91%, and 83%, as shown in Figure 6. It is evident that the proposed haptic texture rendering method helps users understand the texture content of an image, so it is an effective approach to image-based haptic texture rendering.
Furthermore, we conducted another group of experiments in which we selected four 2D images and extracted their texture features with the proposed adaptive algorithm, as shown in Table 3. The extraction results show that the image textures extracted by the adaptive fractional differentiation algorithm are clear and the details are preserved completely, indicating the excellent performance of the adaptive algorithm in extracting fine texture.
The TV-Gabor model was used for comparison (the TV-Gabor model decomposes the image using prior information on frequency and texture direction, so as to separate the contour of the image body from the texture part), and four real object images with textures were selected, as shown in Table 3. As before, we used the proposed haptic texture rendering model to render the object surfaces based on the extracted textures. With the volunteers blindfolded, we calculated the correct recognition rate for each image, as shown in Figure 7. This again verifies that the texture features output by the haptic texture rendering model match the volunteers' perception more realistically.

8. Conclusions

In this paper, a haptic texture rendering method for 2D images based on a novel adaptive fractional differentiation has been described. The optimal order of the fractional differential operator is adaptively selected by using the CSGV obtained after wavelet decomposition together with human visual characteristics. In addition, a quantitative analysis method based on image information entropy and multi-scale structural similarity (MS-SSIM) has been proposed to evaluate the results of texture feature extraction. On this basis, we have provided a novel haptic texture model in which the extraction results are used to reconstruct the three-dimensional texture force field that renders the texture surface of 2D images. The experimental results show an average classification accuracy improvement of 0.4 dB over the established technique (TV-Gabor) and verify that the proposed haptic texture rendering method based on adaptive fractional differentiation can extract texture features well and achieve the best texture force field for 2D images. It is an effective approach to enhancing image texture and can also improve human haptic texture perception in image-based haptic display systems.

Author Contributions

Conceptualization and methodology, A.S. and H.H.; formal analysis, A.S. and H.H.; supervision, A.S.; writing—original draft preparation, A.S.; writing and editing, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Shenzhen Virtual University Park Basic Research Project under grant No. 2021Szvup025.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at https://www.ux.uis.no/~tranden/brodatz.html (accessed on 7 July 2020).

Acknowledgments

The authors would like to thank editor-in-chief, editor, and anonymous reviewers for their valuable reviews.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Choi, S.; Tan, H.Z. An analysis of perceptual instability during haptic texture rendering. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, FL, USA, 24–25 March; pp. 129–136. [CrossRef] [Green Version]
  2. Jin, R.; Skedung, L.; Cazeneuve, C.; Chang, J.C.; Rutland, M.W.; Ruths, M.; Luengo, G.S. Bioinspired Self-Assembled 3D Patterned Polymer Textures as Skin Coatings Models: Tribology and Tactile Behavior. Biotribology 2020, 24, 100151. [Google Scholar] [CrossRef]
  3. Jarocka, E.; Pruszynski, J.A.; Johansson, R.S. Human Touch Receptors Are Sensitive to Spatial Details on the Scale of Single Fingerprint Ridges. J. Neurosci. 2021, 41, 3622–3634. [Google Scholar] [CrossRef] [PubMed]
  4. Hassan, A.W.; Abdulali, M.; Abdullah, S.C.; Jeon, A.S. Towards Universal Haptic Library: Library-Based Haptic Texture Assignment Using Image Texture and Perceptual Space. IEEE Trans. Haptics 2018, 11, 291–303. [Google Scholar] [CrossRef] [PubMed]
  5. Spillmann, S.J.; Harders, T.M. Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator. IEEE Trans. Vis. Comput. Graph. 2013, 19, 626–633. [Google Scholar] [CrossRef] [PubMed]
  6. Li, A.J.; Zhang, S.X. Image-based haptic texture rendering. In Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and its Applications in Industry, Seoul, Republic of Korea, 12–13 December 2010; pp. 237–242. [Google Scholar]
  7. Janssen, M.J.; Huisman, M.; Van Dijk, J.P.M.; Ruijssenaars, W.A.J.J.M. Touching textures in different tasks by a woman with congenital deaf-blindness. J. Vis. Impair. Blind. 2012, 106, 739–745. [Google Scholar] [CrossRef]
  8. Andrews, S.; Lang, J. Haptic texturing based on real-world samples. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and Their Applications, Ottawa, ON, Canada, 22–23 October 2007; pp. 142–147. [Google Scholar]
  9. Vasudevan, H.; Manivannan, M. Recordable haptic textures. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and Their Applications, Ottawa, ON, Canada, 4–5 November 2006; pp. 130–133. [Google Scholar]
  10. Song, Y.A.; Han, H.; Tian, H.L.; Wu, J. Active perception-based haptic texture sensor. Sens. Mater. 2013, 25, 1–15. [Google Scholar]
  11. Bove, J.V.M.; Plesniak, W.J.; Quentmeyer, T.; Barabas, J. Real-time holographic video images with commodity PC hardware. Proc. SPIE 2005, 5664, 255–262. [Google Scholar]
  12. Lu, S.H.; Zheng, M.L.; Fontaine, M.C.; Nikolaidis, S.; Culbertson, H. Preference-Driven Texture Modeling through Interactive Generation and Search. IEEE Trans. Haptics 2022, 15, 508–520. [Google Scholar] [CrossRef]
  13. Halabi, O.; Khattak, G. Generating haptic texture using solid noise. Displays 2021, 69, 102048. [Google Scholar] [CrossRef]
  14. Friesen, R.F.; Klatzky, R.L.; Peshkin, M.A.; Colgate, J.E. Building a Navigable Fine Texture Design Space. IEEE Trans. Haptics 2021, 14, 897–906. [Google Scholar] [CrossRef]
  15. Kanade, T.; Narayanan, P.J.; Rander, P.W. Virtualized reality: Concepts and early results. In Proceedings of the IEEE Workshop on Representation of Visual Scenes, Cambridge, MA, USA, 24 June 1995; pp. 69–76. [Google Scholar]
  16. Benjamin, L.M.; Kheddar, A. A simple way of integrating texture in virtual environments for haptic rendering. In Proceedings of the 12th International Conference on Advanced Robotics, Seattle, WA, USA, 8–11 August 2005; pp. 755–760. [Google Scholar]
  17. Wu, A.J.; Song, C.; Zou, A. Novel haptic texture display based on image processing. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics, Sanya, China, 15–18 December 2007; pp. 1315–1320. [Google Scholar]
  18. Xu, S.; Li, C.; Hu, L.; Jiang, S.; Liu, X.P. An improved switching vector median filter for image-based haptic texture generation. In Proceedings of the 5th International Congress on Image and Signal Processing, Chongqing, China, 23–25 October 2012; pp. 1195–1199. [Google Scholar]
  19. Vasudevan, H.; Manivannan, M. Tangible images: Runtime generation of haptic textures from images. In Proceedings of the IEEE Symposium on Haptic interfaces for virtual environment and teleoperator systems, Reno, NV, USA, 13–14 March 2008; pp. 357–360. [Google Scholar]
  20. Li, J.; Song, A.; Zhang, X. Haptic texture rendering using single texture image. In Proceedings of the IEEE International Symposium on Computational Intelligence and Design, Washington, DC, USA, 29 October 2010; pp. 7–10. [Google Scholar]
  21. Vimina, E.R.; Divya, M.O. Maximal multi-channel local binary pattern with colour information for CBIR. Multimed. Tools Appl. 2020, 79, 25357–25377. [Google Scholar] [CrossRef]
  22. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: Cambridge, MA, USA, 1974. [Google Scholar]
  23. Falconer, K. Fractal Geometry—Mathematical Foundations and Applications, 2nd ed.; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2003. [Google Scholar]
  24. Rakhshan, S.A.; Kamyad, A.V.; Effati, S. An efficient method to solve a fractional differential equation by using linear programming and its application to an optimal control problem. J. Vib. Control 2016, 22, 2120–2134. [Google Scholar] [CrossRef]
  25. Zou, Q.; Jin, Q.; Zhang, R. Design of fractional order predictive functional control for fractional industrial processes. Chemom. Intell. Lab. Syst. 2016, 152, 34–41. [Google Scholar] [CrossRef]
  26. Zhang, Y.Z.; Yang, L.J.; Li, Y. A Novel Adaptive Fractional Differential Active Contour Image Segmentation Method. Fractal Fract. 2022, 6, 579. [Google Scholar] [CrossRef]
  27. Coleman, S.A.; Suganthan, S.; Scotney, B.W. Gradient operators for feature extraction and characterization in range images. Pattern Recognit. Lett. 2010, 31, 1028–1040. [Google Scholar] [CrossRef]
  28. Fang, A.Q.; Zhao, X.B.; Yang, J.Q.; Zhang, Y.N.; Zheng, X. Non-linear and selective fusion of cross-modal images. Pattern Recognit. 2021, 119, 108042. [Google Scholar] [CrossRef]
  29. Ortigueira, M.D.; Machado, J.A.T. What is a fractional derivative? J. Comput. Phys. 2015, 293, 4–13. [Google Scholar] [CrossRef]
  30. Wang, W.X.; Li, W.S.; Yu, X. Fractional differential algorithms for rock fracture images. Imaging Sci. J. 2012, 60, 103–111. [Google Scholar] [CrossRef]
  31. Beck, J.; Sutter, A.; Ivry, R. Spatial frequency channels and perceptual grouping in texture segregation. Comput. Vis. Graph. Image Process. 1987, 37, 299–325. [Google Scholar] [CrossRef]
  32. Bovik, A. Analysis of multichannel narrow band filters for image texture segmentation. IEEE Trans. Signal Process. 1991, 39, 2025–2043. [Google Scholar] [CrossRef]
  33. Jain, A.K. Learning texture discrimination masks. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 195–205. [Google Scholar] [CrossRef]
  34. Daubechies, I. The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory 1990, 36, 961–1005. [Google Scholar] [CrossRef] [Green Version]
  35. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  36. Fountain, S.; Tan, T. Efficient rotation invariant texture features for content-based image retrieval. Pattern Recognit. 1998, 31, 1725–1732. [Google Scholar] [CrossRef]
  37. Liu, H.; Huang, K. A medical image processing method based on human eye visual property. Opto-Electron. Eng. 2001, 28, 38–41. [Google Scholar]
  38. Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A 2009, 373, 2516–2519. [Google Scholar] [CrossRef]
  39. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multi-scale structural similarity for image quality assessment. In Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; pp. 1398–1402. [Google Scholar]
  40. Zhu, B. Applied Psychology; Tsinghua University Press: Beijing, China, 2004. [Google Scholar]
  41. Brodatz Texture Database. Available online: http://www.ux.uis.no/~tranden/brodatz.html (accessed on 7 July 2020).
Figure 1. Texture extraction comparison tests: (a0–e0) are the original images; (a1–e1) are the 0.3-order fractional derivative results; (a2–e2) are the 0.55-order results; (a3–e3) are the 0.7-order results; (a4–e4) are the adaptive fractional differentiation results.
Figure 2. Multi-scale structural similarity measurement procedure.
Figure 3. Quantitative comparison between adaptive methods and different fractional orders.
Figure 4. Two-dimensional texture tangential force vectors of grey image. (a) texture feature extracted from “Lena” image using fractional differential method (v = 0.55); (b) texture tangential force vectors in amplified red square area in (a).
Figure 5. The experimental system of haptic texture rendering of 2D image.
Figure 6. Experiment results of image classification using haptic texture feeling.
Figure 7. Experimental results of image classification with different haptic extraction algorithms.
Table 1. Physical information of the texture image.
Each bracketed entry lists the values for the four directions [0°, 45°, 90°, 135°]; the first five entries per row correspond to the original images (a0–e0) and the last five to the extracted textures (a4–e4).
AMS — Original (a0–e0): [0.00015, 0.00013, 0.00013, 0.00011], [0.00040, 0.00028, 0.00037, 0.00029], [0.00011, 0.00009, 0.00011, 0.00010], [0.00027, 0.00023, 0.00036, 0.00023], [0.00066, 0.00041, 0.00061, 0.00040]; Extracted (a4–e4): [0.00038, 0.00031, 0.00034, 0.00030], [0.00075, 0.00060, 0.00077, 0.00061], [0.00011, 0.00009, 0.00010, 0.00009], [0.00052, 0.00048, 0.00061, 0.00047], [0.00273, 0.00187, 0.00245, 0.00188]
Entropy — Original (a0–e0): [0.0120, 0.0112, 0.0115, 0.0105], [0.0201, 0.0167, 0.0192, 0.0169], [0.0105, 0.0094, 0.0103, 0.0099], [0.0162, 0.0153, 0.0189, 0.0151], [0.0258, 0.0202, 0.0247, 0.0200]; Extracted (a4–e4): [0.0195, 0.0175, 0.0183, 0.0174], [0.0274, 0.0246, 0.0277, 0.0246], [0.0104, 0.0096, 0.0102, 0.0100], [0.0228, 0.0219, 0.0247, 0.0217], [0.0522, 0.0433, 0.0495, 0.0433]
Contrast — Original (a0–e0): [457.73, 657.91, 531.97, 1177.25], [202.81, 782.97, 616.01, 780.25], [555.80, 1184.74, 621.75, 940.61], [738.38, 970.86, 544.27, 1283.43], [383.83, 601.95, 247.00, 644.16]; Extracted (a4–e4): [238.31, 479.46, 341.27, 547.85], [122.90, 238.49, 134.69, 243.24], [1117.87, 2451.20, 1251.64, 1540.86], [221.82, 293.59, 165.43, 323.46], [89.14, 135.43, 54.82, 136.63]
Correlation — Original (a0–e0): [0.843, 0.774, 0.817, 0.596], [0.897, 0.602, 0.687, 0.604], [0.888, 0.76, 0.875, 0.811], [0.790, 0.723, 0.846, 0.635], [0.879, 0.812, 0.923, 0.799]; Extracted (a4–e4): [0.608, 0.213, 0.439, 0.101], [0.647, 0.316, 0.614, 0.303], [0.627, 0.183, 0.582, 0.487], [0.465, 0.294, 0.603, 0.222], [0.610, 0.407, 0.761, 0.401]
Table 2. Texture feature extraction of 2D images using the adaptive fractional differential method.
Image 1 | Image 2 | Image 3 | Image 4 | Image 5
Original images: [image] [image] [image] [image] [image]
Extracted textures: [image] [image] [image] [image] [image]
Table 3. Comparison of image classification experiments with different haptic extraction algorithms.
Image 1 | Image 2 | Image 3 | Image 4
Original images: [image] [image] [image] [image]
Adaptive method: [image] [image] [image] [image]
TV-Gabor: [image] [image] [image] [image]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
