Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones
Abstract
1. Introduction
1.1. Motivations
- People often operate smartphones with the right (or left) hand while the buttons they want to press sit in the top-left (or top-right) corner. Because smartphone screens are large, they must stretch to reach these buttons, and they are forced to use both hands if the buttons are too far away, since almost all user interfaces today are fixed. This is inconvenient when the other hand is busy with something else, such as eating, carrying heavy loads, holding a handle on the metro, or driving a car.
- People frequently use the input method or the dialing keyboard to interact with smartphones, and these two components usually occupy the entire width of the screen, so both hands are needed for input on large smartphones. User experience would be greatly improved if the smartphone could dynamically detect the current operating hand, shrink the input method or dialing keyboard proportionally, and gather it on that hand's side of the screen: users could then complete the whole input process with a single hand, even on very large screens.
- There are numerous kinds of buttons on the smartphone user interface, and some of them are sensitive or even destructive; for example, the send button for SMS, the dislike button in social software such as Facebook, and the delete button of the photo album (e.g., in the Android 5.1.1 operating system). Serious consequences may follow if the user accidentally touches these buttons without noticing. Therefore, these buttons should be placed at specific positions (the red area in Figure 1) of the smartphone user interface that are closely related to the current operating hand. Users then need to make some effort to reach these buttons if they really want to, so an accidental touch will not happen.
- Most Android-based smartphones today have three buttons at the bottom: the back button, the home button, and the menu button. In our investigation, almost all users press the back button more frequently than the menu button. However, as far as we know, the positions of these three buttons are fixed on today's Android-based smartphone user interfaces. A more reasonable approach is to put the back button at whichever of the three positions is easiest to touch with the current operating hand.
2. Recognition of the Current Operating Hand from Touchscreen Traces
2.1. System Overview
2.2. Utilizing Pattern Recognition Algorithms for Recognition
2.2.1. Computation of the Feature Vector
- Length features. Total length.
- Velocity features. Maximum and average velocity. Standard deviation of velocity. The changing process of the velocity on a trace contains two phases: acceleration and deceleration. The durations of acceleration processes are different between traces generated by index fingers and thumbs because of the different lengths of the traces. The deceleration processes of traces generated by index fingers are shorter because these traces often slide out of the touchscreens directly without deceleration. Therefore, the velocity features of traces generated by index fingers and thumbs are different. Figure 4 plots the change of the velocity magnitude with displacement on traces performed by different fingers of the same participant. The directions of traces are left in Figure 4a and up in Figure 4b. We can observe that, for this participant, velocity features can be utilized to distinguish traces generated by different fingers.
- Displacement features. Total and maximum X-displacement; total and maximum Y-displacement.
- Shape features. To describe the shape of traces quantitatively and accurately, we fit the discrete sampling points of every trace with a quartic polynomial, which is a tradeoff among fitting precision, computational complexity, and the degree of distinction between traces generated by different fingers. Root mean squared error. This feature measures how smooth a trace is. Index fingers are more flexible and have larger touching ranges than thumbs; when operating smartphones, index fingers exert less pressure on touchscreens and receive less friction resistance, so traces generated by index fingers are always smoother than those generated by thumbs. With n sampling points y_j and fitted values ŷ_j, the root mean squared error of a trace is calculated as: RMSE = sqrt( (1/n) Σ_{j=1}^{n} (y_j − ŷ_j)² ). Maximum and average curvature. These two features measure the curvature degree of traces. Users unconsciously slide on touchscreens in the most effortless way. When users operate smartphones with thumbs, restricted by the limited touching range, most of the generated traces are curves whose centers are on the same side as the operating hand; the touching ranges of index fingers are significantly larger, which produces straighter traces. The curvature at the j-th sampling point of a trace is calculated as: κ_j = |x′_j y″_j − y′_j x″_j| / (x′_j² + y′_j²)^{3/2}, where the derivatives are estimated from neighboring sampling points. Curve convex orientation. This feature measures the curve's convex orientation, which is very useful in distinguishing traces generated by the left thumb and the right thumb. To calculate the curve convex orientation of a trace, first, we randomly choose a sampling point close to the middle of the trace. Second, we construct two vectors from this sampling point to the first and the last sampling point, respectively. Then, we calculate the cross product of these two vectors, whose sign gives the convex orientation.
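The trace features above can be sketched in code. The following is a minimal NumPy illustration, not the paper's implementation: the function name, the finite-difference curvature scheme, and the assumption that x is monotonic enough for a quartic fit y = f(x) are our own (near-vertical traces would need the axes swapped or an arc-length parameterization).

```python
import numpy as np

def trace_features(t, x, y):
    """Feature-vector sketch for one touchscreen trace.

    t: timestamps (s); x, y: screen coordinates of the sampling points."""
    t, x, y = (np.asarray(a, dtype=float) for a in (t, x, y))
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    step = np.hypot(dx, dy)
    v = step / dt                                  # per-interval velocity

    feats = {
        "total_length": step.sum(),                # length feature
        "v_max": v.max(), "v_mean": v.mean(), "v_std": v.std(),
        "x_total": np.abs(dx).sum(), "x_max": np.abs(x - x[0]).max(),
        "y_total": np.abs(dy).sum(), "y_max": np.abs(y - y[0]).max(),
    }

    # shape: quartic fit, then RMSE of the residuals (smoothness)
    coeffs = np.polyfit(x, y, 4)
    resid = y - np.polyval(coeffs, x)
    feats["rmse"] = np.sqrt(np.mean(resid ** 2))

    # curvature at each sampling point via finite differences
    x1, y1 = np.gradient(x), np.gradient(y)
    x2, y2 = np.gradient(x1), np.gradient(y1)
    kappa = np.abs(x1 * y2 - y1 * x2) / (x1 ** 2 + y1 ** 2) ** 1.5
    feats["kappa_max"], feats["kappa_mean"] = kappa.max(), kappa.mean()

    # convex orientation: sign of the cross product of the vectors from a
    # mid-trace sampling point to the first and last sampling points
    m = len(x) // 2
    v1 = (x[0] - x[m], y[0] - y[m])
    v2 = (x[-1] - x[m], y[-1] - y[m])
    feats["convex_orientation"] = np.sign(v1[0] * v2[1] - v1[1] * v2[0])
    return feats
```

A left-curving trace and a right-curving trace yield opposite signs of `convex_orientation`, which is what separates left-thumb from right-thumb traces.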
2.3. Evaluation
2.3.1. Recognition Performance of Pattern Recognition Algorithms
Computation Time of the Feature Vector
Multi-Class Classification Algorithms
Impact of the Number of Training Samples
Impact of the Number of Trees
Evaluation
3. Recognition of the Hand-Changing Process from the Accelerometer and Gyroscope Data
3.1. System Overview
3.2. Activities of Daily Life
3.3. Segmentation Algorithm
End-Points Detection
3.4. Utilizing Pattern Recognition Algorithms for Recognition
3.4.1. Computation of the Feature Vector
- Time domain features. Time domain features focus on intuitive waveform characteristics that can be obtained from data segments directly, so they require very little computation and storage. Max, mean, min and range. These features describe the basic shape of accelerometer and gyroscope data segments and have been extensively exploited in various works, especially threshold-based algorithms.
- Statistics features. Statistics features capture the distribution characteristics of consecutive accelerometer and gyroscope sampling points. Kurtosis. This feature weighs how the amplitude decays near the extreme points, namely the peakedness or flatness of the distribution; larger kurtosis values indicate a more peaked distribution. For a segment a_1, …, a_N with mean ā, the kurtosis is calculated as: Kurt = [ (1/N) Σ_{i=1}^{N} (a_i − ā)⁴ ] / [ (1/N) Σ_{i=1}^{N} (a_i − ā)² ]².
- Frequency domain features. Frequency domain features pay attention to the periodic nature of the signal. We transform the time series of accelerometer or gyroscope data into a spectrum employing the FFT (Fast Fourier Transform) in this paper. In the following, A_k denotes the amplitude of the k-th frequency component f_k (k = 1, …, K). Centroid. This feature characterizes the barycenter of the frequency spectrum: C = Σ_k f_k A_k / Σ_k A_k. DC amplitude. This feature denotes the amplitude of the DC component. Decrease. This feature weighs the decreasing degree of the frequency spectrum curve: Decrease = [ Σ_{k=2}^{K} (A_k − A_1)/(k − 1) ] / Σ_{k=2}^{K} A_k. Flux. This feature describes the stability of the frequency spectrum curve, in other words, how much the spectrum of the accelerometer or gyroscope signal changes between consecutive frames: Flux = ‖A^{(c)} − A^{(p)}‖₂, where A^{(c)} and A^{(p)} are the current and previous spectra. We choose the L2-norm, namely the Euclidean distance, and the unnormalized spectrum when computing flux in this paper. Peak amplitude. This feature measures the maximum amplitude in the frequency spectrum: Peak = max_k A_k. Roll-off. This feature captures the frequency below which 75% of the total amplitude of all frequency components is contained: the smallest f_R such that Σ_{f_k ≤ f_R} A_k ≥ 0.75 Σ_k A_k. Spread. This feature denotes the shape of the frequency spectrum, that is, whether it is concentrated in the vicinity of its centroid or spread out: Spread = sqrt( Σ_k (f_k − C)² A_k / Σ_k A_k ).
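The statistics and frequency-domain features above can be sketched as follows. This is a hedged illustration, not the paper's code: the formulas follow common spectral-feature definitions, the exact normalizations in the paper may differ, and computing flux between the two halves of one segment (rather than between consecutive frames) is our stand-in for a single-segment sketch.

```python
import numpy as np

def kurtosis(seg):
    """Peakedness of the amplitude distribution; larger = more peaked."""
    d = seg - seg.mean()
    return (d ** 4).mean() / ((d ** 2).mean() ** 2)

def spectral_features(seg, fs, rolloff_pct=0.75):
    """Frequency-domain features for one accelerometer/gyroscope axis.

    seg: 1-D array of sampling points; fs: sampling rate in Hz."""
    n = len(seg)
    amp = np.abs(np.fft.rfft(seg))           # unnormalized amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    centroid = (freqs * amp).sum() / amp.sum()
    spread = np.sqrt((((freqs - centroid) ** 2) * amp).sum() / amp.sum())

    # roll-off: lowest frequency below which 75% of the total amplitude lies
    cum = np.cumsum(amp)
    rolloff = freqs[np.searchsorted(cum, rolloff_pct * cum[-1])]

    # decrease: average slope of the spectrum relative to its first component
    k = np.arange(1, len(amp))
    decrease = ((amp[1:] - amp[0]) / k).sum() / amp[1:].sum()

    # flux: L2 distance between the spectra of the segment's two halves
    h = n // 2
    flux = np.linalg.norm(np.abs(np.fft.rfft(seg[h:2 * h]))
                          - np.abs(np.fft.rfft(seg[:h])))

    return {"dc": amp[0], "peak": amp.max(), "centroid": centroid,
            "spread": spread, "rolloff": rolloff, "decrease": decrease,
            "flux": flux}
```

On a pure 5 Hz sine sampled at 100 Hz, the centroid and roll-off both land near 5 Hz and the kurtosis is close to 1.5, the theoretical value for a sinusoid.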
3.4.2. Dimension Reduction of the Feature Vector
3.5. Utilizing Dynamic Time Warping for Recognition
Preliminaries about DTW
Algorithm 1: Calculate DTW distance set.
1: Input:
2:   T: template library containing template segments.
3:   s: segment for recognition, whose class is unknown.
4: Output:
5:   D: DTW distance set containing DTW distances between s and each template segment.
6: for each template segment t_i in T do
7:   d_i = DynamicTimeWarping(t_i, s);
8:   add d_i into D;
9: end for
Algorithm 2: Dynamic time warping.
1: Input:
2:   t: template segment, with sampling points t_1 to t_m.
3:   s: segment for recognition, whose class is unknown, with sampling points s_1 to s_n.
4: Output:
5:   d: DTW distance between s and t.
6: normalize the amplitude of each sampling point in s into [-1, 1];
7: normalize the amplitude of each sampling point in t into [-1, 1];
8: D(0, 0) = 0;
9: for i = 1 to m do
10:   D(i, 0) = +∞;
11: end for
12: for j = 1 to n do
13:   D(0, j) = +∞;
14: end for
15: for i = 1 to m do
16:   for j = 1 to n do
17:     cost = |t_i − s_j|;
18:     D(i, j) = cost + min(D(i−1, j), D(i, j−1), D(i−1, j−1));
19:   end for
20: end for
21: d = D(m, n);
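Algorithms 1 and 2 can be sketched directly in code. This is an unoptimized O(mn) illustration under our own assumptions: the function names are ours, and the normalization into [-1, 1] is read as dividing by the maximum absolute amplitude, one plausible interpretation of lines 6–7.

```python
import numpy as np

def dtw_distance(t, s):
    """DTW distance between template t and segment s (1-D sequences),
    following Algorithm 2: amplitude-normalize both, then fill the
    classic cumulative-cost matrix."""
    def norm(a):
        a = np.asarray(a, dtype=float)
        m = np.abs(a).max()
        return a / m if m > 0 else a
    t, s = norm(t), norm(s)

    m, n = len(t), len(s)
    D = np.full((m + 1, n + 1), np.inf)   # D(i,0) = D(0,j) = +inf
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(t[i - 1] - s[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

def dtw_distance_set(templates, s):
    """Algorithm 1: DTW distances between s and each template segment."""
    return [dtw_distance(t, s) for t in templates]
```

Classification then picks the template class with the smallest distance in the set; because of the warping, sequences of different lengths with the same shape (e.g., [0, 1, 0] and [0, 1, 1, 0]) get distance zero.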
3.6. Evaluation
3.6.1. Recognition Performance of Pattern Recognition Algorithms
Experimental Dataset
Multi-Class Recognition Algorithms
Dimension Reduction Algorithms
Time Consumption of Every Computational Step
3.6.2. Recognition Performance of DTW
Experimental Dataset
Experimental Results
Time Consumption of DTW
4. Related Work
4.1. Touch Behavioral Biometrics-Based Authentication
4.2. Human Activity Recognition with Inertial Sensors
4.3. Smartphone User Interface Adjustment
5. Discussion
5.1. Smartphone User Interface Adjustment
5.2. Energy Consumption Analysis
5.2.1. Recognition of the Current Operating Hand
5.2.2. Recognition of the Hand-Changing Process
5.2.3. Another Related Paper
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
TPR | True Positive Rate
FPR | False Positive Rate
DT | Decision Tree
RF | Random Forest
NB | Naive Bayes
MLP | Multi-Layer Perceptron
k-NN | k-Nearest Neighbors
LDA | Linear Discriminant Analysis
PCA | Principal Component Analysis
DTW | Dynamic Time Warping
Category | Number | Features
---|---|---
Length Features | 1 | Total Length
Velocity Features | 3 | Maximum and Average Velocity, Standard Deviation of Velocity
Displacement Features | 4 | Total and Maximum X-Displacement, Total and Maximum Y-Displacement
Shape Features | 4 | Root Mean Squared Error, Maximum and Average Curvature, Curve Convex Orientation
Features | Components | Time
---|---|---
Length | Total | (0.046 ± 0.011) ms
Velocity | Maximum, Average, Standard Deviation | (0.088 ± 0.040) ms
Displacement | Total X, Maximum X, Total Y, Maximum Y | (0.037 ± 0.012) ms
Shape | RMSE | (3.834 ± 0.367) ms
Shape | Maximum and Average Curvature | (13.834 ± 1.005) ms
Shape | CCO | (0.007 ± 0.002) ms
Total | | (16.867 ± 1.553) ms
Classifier | Precision | Recall | F1 | AUC
---|---|---|---|---
DT | 92.2% | 91.8% | 91.8% | 0.975
RF | 95.6% | 95.6% | 95.6% | 0.990
NB | 82.4% | 71.5% | 72.8% | 0.919
MLP | 96.9% | 96.8% | 96.8% | 0.999
k-NN | 88.1% | 88.2% | 88.2% | 0.951
Domain | Number | Accelerometer Data-Based Features
---|---|---
Time | 4 | All-Max, All-Mean, All-Min, All-Range
Time | 3 | AbsX-Max, AbsX-Mean, X-Range
Time | 5 | AbsY-Mean, AbsY-Min, Y-Max, Y-Mean, Y-Min
Time | 5 | AbsZ-Max, AbsZ-Mean, Z-Max, Z-Mean, Z-Range
Statistics | 2 | X-Kurtosis, Y-Kurtosis
Frequency | 2 | All-DC Amplitude, All-Peak Amplitude
Frequency | 3 | X-Centroid, X-Decrease, X-Roll-Off
Frequency | 7 | Y-Centroid, Y-DC Amplitude, Y-Decrease, Y-Flux, Y-Peak Amplitude, Y-Roll-Off, Y-Spread
Frequency | 5 | Z-DC Amplitude, Z-Decrease, Z-Flux, Z-Peak Amplitude, Z-Spread
Domain | Number | Gyroscope Data-Based Features
---|---|---
Time | 2 | All-Max, All-Range
Time | 3 | AbsY-Max, Y-Max, Y-Range
Time | 1 | Z-Mean
Statistics | 2 | X-Kurtosis, Y-Kurtosis
Frequency | 1 | All-Peak Amplitude
Frequency | 4 | X-Centroid, X-Decrease, X-Roll-Off, X-Spread
Frequency | 3 | Y-Centroid, Y-Roll-Off, Y-Spread
Frequency | 2 | Z-DC Amplitude, Z-Decrease
Activities | Total
---|---
Left Hand to Right Hand | 500 times
Right Hand to Left Hand | 500 times
Activities of Daily Life (Walk, Run, Go Upstairs and so on) | 7 days
Step | Computational Process | Phase | Time
---|---|---|---
1–2 | Acquisition of Raw Accelerometer and Gyroscope Data; Segmentation of Raw Data into Segments | | (2.33 ± 0.09) ms
3 | Segment-Level Feature Extraction | | (4.53 ± 0.04) ms
4 | Dimension Reduction (LDA) | Train | ∼0 ms
4 | Dimension Reduction (LDA) | Test | 
6 | Recognition Algorithm (RF) | Train | ∼0 ms
6 | Recognition Algorithm (RF) | Test | 
Total | | | (6.86 ± 0.11) ms
Category | Activities | Total
---|---|---
Template Library | Left Hand to Right Hand | 20 times
Template Library | Right Hand to Left Hand | 20 times
Test Set | Left Hand to Right Hand | 500 times
Test Set | Right Hand to Left Hand | 500 times
Test Set | Activities of Daily Life (Walk, Run, Go Upstairs and so on) | 4 days
Step | Computational Process | Time
---|---|---
5 | Segment-Level Feature Extraction (DTW) | (233.51 ± 0.61) ms
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Guo, H.; Huang, H.; Huang, L.; Sun, Y.-E. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones. Sensors 2016, 16, 1314. https://doi.org/10.3390/s16081314