Article

The Fusion of an Ultrasonic and Spatially Aware System in a Mobile-Interaction Device

Di Wang 1, Chunying Zhao 2 and Jun Kong 1,*
1 Department of Computer Science, North Dakota State University, 1340 Administration Ave, Fargo, ND 58102, USA
2 School of Computer Sciences, Western Illinois University, 1 University Cir, Macomb, IL 61455, USA
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(8), 137; https://doi.org/10.3390/sym9080137
Submission received: 4 June 2017 / Revised: 25 July 2017 / Accepted: 26 July 2017 / Published: 30 July 2017
(This article belongs to the Special Issue Information Technology and Its Applications)

Abstract

Over the past four decades, computer pundits and prognosticators have prophesied the looming arrival of the paperless office. Forty years later, however, physical paper documents still play a significant role due to their ease of use, superior readability, and availability. The drawbacks of paper sheets are that they are hard to modify and retrieve, have limited space, and are environmentally unfriendly. Augmenting paper documents with digital information from mobile devices extends the two-dimensional space of physical paper documents. Various camera-based recognition and detection devices have been proposed to augment paper documents with digital information, but these systems still have limitations. This paper presents a novel, low-cost, spatially aware mobile system called Ultrasonic PhoneLens. Ultrasonic PhoneLens adopts two-dimensional dynamic image presentation and ultrasonic sound positioning techniques. It consists of two ultrasonic sound sensors, one Arduino mini-controller board, and one Android mobile device. Based on the location of the mobile device over the physical paper, Ultrasonic PhoneLens retrieves pre-saved digital information from a mobile database for an object (such as a text, a paragraph, or an image) in a paper document. An empirical study was conducted to evaluate the system's performance. The results indicate that our system performs better in tasks such as browsing multivalent documents and sharing digital information than the Wiimote PhoneLens system.

1. Introduction

With the rapid development of computer technology, more and more information is created and saved in digital formats. It is much easier to collect, store, transfer, and retrieve information in the digital world. However, physical paper documents still play a significant role in our daily life due to their ease of use, superior readability, and availability, and in many workplaces some people still prefer to read printed documents. When reading a paper document, a reader may want more information about a particular object, such as a text, a paragraph, or an image, and such information might be available in digital format. Integrating information from paper documents and electronic devices allows readers to easily access information not included in a traditional paper document. To achieve this, paper-based augmented reality, which overlays digital information on traditional paper, is needed.
Augmented reality is the integration of digital information with the physical, real-world environment. Using augmented reality, elements in the real world are augmented with computer-generated information such as sound, video, or digital images. For instance, when a user looks at a restaurant on a paper map, augmented reality lets him/her also read the restaurant's reviews and menus, which are available digitally. To connect an object on physical paper with its related digital information, we need spatial awareness techniques. Spatial awareness is the ability of an object to know its own position in a real-world environment; developing spatial awareness means identifying the location of an object relative to its surroundings in space. Therefore, location information is vital to spatial awareness.
The development of spatial awareness and augmented reality has shifted from traditional computers to mobile devices, which are gaining computational power and becoming more popular. Different methods have been applied to support spatially aware mobile interactions, such as marker-based methods [1,2,3] and content-recognition based solutions [4,5,6]. Marker-based approaches convert physical information to digital content. Reilly et al. [7] developed a marker-based mobile system that combines paper maps with electronic information; it uses a high-performance, compact radio frequency identification (RFID) reader to recognize the RFID tags on the paper maps. The problem with this approach is that the user can only obtain static electronic information from each tag on a map. Content-recognition based techniques, in contrast, use cameras to identify objects in real environments and can work with a large paper sheet without large hardware. This methodology, however, demands high computing capability and power; due to the complexity of the computation, it is hard to provide dynamic, simultaneous results, and it is not suitable for smartphones.
In an augmented paper system, a user moves a spatially aware mobile device over a paper document and acquires digital information based on the position of the user's focus. However, the mobile device has no ability to directly access a physical paper document; spatially aware mobile systems must rely on other hardware to recognize the user's focus when browsing a paper document. Lee [8] used the Wiimote to create multi-point interactive whiteboards. His invention enhanced the information in a paper document with a tablet display; however, the large external Wiimote hardware reduces the system's portability.
To address some of the issues mentioned above, we designed a new spatially aware system called the Ultrasonic PhoneLens. It is a low-cost, portable, and high-performance solution for spatially aware systems. The major functional requirement of the system is to provide a means for users to acquire digital information about the texts or images in a paper document based on the location of the device. Nonfunctional requirements include usability, real-time performance, portability, and accuracy. We evaluated these requirements, such as the comfort level when browsing a paper document and the precise detection of the real position, in the empirical evaluation in Section 4.
Ultrasonic PhoneLens integrates two ultrasonic sound sensors, an Arduino board, and a mobile device. The ultrasonic sound sensors assembled on the Arduino board provide reliable distance detection in the paper-based working environment by sending sound waves to the wood barriers set up at the borders of the paper. The Arduino transmits the distances as coordinates (i.e., the location over the paper) to the mobile device through Bluetooth. Based on the coordinates detected by the ultrasonic sound sensors, the digital information corresponding to the text or image in the paper is displayed on the screen of the mobile device.
The major contribution of this paper is a low-cost, mobile-based, portable augmented paper system with spatial awareness capability. The movement-based interface in Ultrasonic PhoneLens allows the user to acquire digital information related to a paper sheet through hand movements, combined with traditional screen-touch interaction on the mobile device. Traditional touch-screen devices only allow users to operate the system within the scope of the screen; moving a handheld mobile device over a paper document enhances usability beyond the limitations of small-screen devices. To our knowledge, this is the first design of a touch-based, mixed-media approach that utilizes ultrasonic technologies and smartphones. We also designed a multi-step data processing method to improve data accuracy.
To evaluate the performance of Ultrasonic PhoneLens, we conducted an empirical study on browsing a paper-based architecture plan. An architecture plan for a room includes multiple layers of information, such as the lighting layout and the electrical layout. These layers overlap and cannot be represented on a single paper sheet. With the help of a mobile spatial awareness system, the user can easily locate information about objects, such as circuit breakers, on the plan. We compared Ultrasonic PhoneLens with Wiimote PhoneLens [9], the first generation of the system, which used a Wii remote and infrared light-emitting diode (LED) lights to augment a paper-based workplace with digital information.
The rest of the paper is organized as follows. Section 2 reviews related work. Details of our system are explained in Section 3. Section 4 describes the empirical study. Results are analyzed in Section 5. Section 6 concludes the paper and discusses future work.

2. Related Work

Numerous approaches have been proposed to augment paper documents with electronic devices. Our study relates to interactive paper systems, spatially aware computing, and augmented reality techniques.
  • Paper systems: Many interactive paper systems have been developed to combine the benefits of paper and digital media. The Anoto technology [10] is a pen-based interaction system that can track handwriting on physical paper and augment paper documents with digital information. Liao [11] designed a pen-based command system for interactive paper, proposing pen-top multimodal feedback that combines visual, tactile, and auditory feedback to support paper-computer interaction. Hotpaper [12] augments paper information with multimedia annotations (such as video or audio); the system analyzes a captured document patch image or video frame to identify the corresponding digital information, such as the electronic document, page number, and location on the page. PaperComposer [13] is an interactive paper interface for music composition that supports composers' expression and exploration in a music book with computer-aided composition tools. S-Notebook [14] is a mobile application that connects mobile devices with interactive paper using an Anoto pen; it allows users to add annotations or drawings to anchors in digital images without learning pre-defined pen gestures and commands. These systems combine traditional paper with digital information; most of them, however, rely on computers or special pens. Our approach improves the portability of interactive paper systems by using touch-based mobile devices.
  • Spatially aware computing: Spatially aware computing has been applied in interactive paper systems. MouseLight [15] is one example: a spatially aware projector built from a mobile laser projector, which can detect the position of a digital pen and track the end user's handwriting. This system, however, requires bimanual operation, making it very hard for users to write and operate the system at the same time. In this paper, we present a new spatially aware interactive system that can be operated with a single-handed device.
  • Augmented reality: Camera-based approaches: Camera-based augmented reality techniques have been widely applied to the interaction between digital images and traditional paper documents. SESIL [16], an augmented reality environment for students' improved learning, is a novel approach that sets up a digital environment to recognize physical book pages and specific elements of interest within those pages. The pages of a book are captured by a camera; the system recognizes the images and produces an electronic page that supports interaction with actual books and pens/pencils. Jee et al. [17] designed an electronic learning system that allows users to read 3D virtual content from traditional textbooks by creating a 3D modeling environment based on the content of the physical book. To improve portability, the cameras integrated in handheld devices have been adopted for camera-based augmented reality. Hansen et al. [18] used the integrated cameras in mobile devices to address how mixed interaction spaces can have different identities, be mapped to applications, and be visualized. By applying image analysis algorithms to the camera pictures, movements or actions, such as rotation and marking, can be determined. These camera-based approaches require high computing power to identify objects in a real environment.
Marker-based approaches: A primary challenge of augmented reality is how to align digital information with the real world. To address this, marker-based approaches using visual markers have been proposed. In 1998, Masutani et al. [19] constructed an augmented reality based visualization system to support intravascular neurosurgery and evaluated it in clinical environments. It augments motion pictures from X-ray fluoroscopy with 3D virtual vascular models and relies on 3D registration fiducial markers. A data-adaptive reprojection technique was introduced to evaluate the reliability of the displayed fluoroscopy by predicting the number of wrong registrations around the registered objects; the results were compiled using synthetic data consisting of fiducial marker coordinates with two-dimensional (2D) or three-dimensional (3D) errors. It was perhaps the earliest software to utilize a marker-based approach for augmented reality. As smartphone technology has exploded, marker-based augmented reality has been revolutionized simultaneously. Built-in camera recognition and detection is a recent development that takes advantage of the internal mobile camera to track markers in a real environment. Klemmer et al. [20] used barcodes as markers to augment paper transcripts with digital video interviews; the system uses a CyberCode reader [21] to identify real objects with a mobile device. Rohs [22] proposed 2-dimensional graphical widgets that let a camera phone retrieve relevant digital information; the widget is a generic, reusable, directly manipulable visual code suitable for printing on paper. Reilly et al. [7] used RFID tags to create a marker-based mobile system that combines paper maps with digital information; the RFID tags are placed in a regular grid beneath the paper map. Wen et al. [23] studied an indoor tracking system with infrared projectors: the projector generates infrared markers on the workplace, and the user wears a tracking camera that captures the position and orientation of each infrared marker. The infrared projector has to be installed on the ground or a wall, which limits the mobility of users. To improve portability, we utilize ultrasonic sensors to detect the location of an object in its environment. This is a fast, inexpensive, and more portable approach than previous work. Based on the location information, the system retrieves the digital information from the smartphone, whose touch screen allows users to easily interact with that information.

3. Ultrasonic PhoneLens

3.1. Hardware Components

The Ultrasonic PhoneLens consists of two ultrasonic sound sensors, one Arduino mini-controller board, and one Android mobile device. As shown in Figure 1a, we integrated the battery, ultrasonic sensors, and Arduino board into a box that is attached to the back of the smartphone. It is a portable and low-cost system.
  • An Arduino board with ultrasonic sound sensors is very compact and space-saving, which makes Ultrasonic PhoneLens easy to use and carry.
  • Ultrasonic sound sensors and Arduino boards are inexpensive (<$30). Furthermore, smartphones and paper documents are quite common in our daily life.
The biggest challenge in designing a movement-based interface is detecting the real-time position of the user's focus. Ultrasonic PhoneLens provides a spatially aware display based on ultrasonic sensing. Ultrasonic sound is a high-frequency sound wave that reflects off surfaces, so the travel time of ultrasonic pulses can be used to determine distance within a certain time period. To measure distance with an ultrasonic sensor, obstacles are needed to reflect the sound waves; the traveling time of the waves is then translated into distance. We set up two wood barriers at the left and bottom borders of the paper to reflect the waves from the sensors. The distances to the edges of the paper indicate the current position, i.e., the user's focus on the paper, and the system then displays the corresponding digital information. The HC-SR04 ultrasonic sensor has a transmitter and a receiver [24]; it can emit an ultrasonic pulse every 600 µs and then detect the reflected wave. The Arduino board is a microcontroller-based circuit board on which the ultrasonic sound sensors are assembled. The smartphone connects to the Arduino via Bluetooth.
The usage of Ultrasonic PhoneLens is shown in Figure 1b. A paper document is attached to a wall, and a user moves the mobile phone over the paper. We treat the paper as a rectangular coordinate system with the lower-left corner of the paper as the origin, i.e., (0, 0). The ultrasonic sound sensors detect the distances to the edges of the paper, and the Arduino board sends the distance data (coordinates) to the mobile device. Based on the coordinates, the smartphone displays the corresponding digital information.
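The Arduino-style sketch below illustrates how such an acquisition loop could look. It is a minimal sketch under our own assumptions: the pin assignments, the 9600-baud serial link to an HC-05-style Bluetooth module, and the "x,y" message format are illustrative choices, not details taken from the paper.

```cpp
// Coordinate acquisition loop for two HC-SR04 sensors (hypothetical wiring).
const int TRIG_X = 2, ECHO_X = 3;   // sensor facing the left barrier
const int TRIG_Y = 4, ECHO_Y = 5;   // sensor facing the bottom barrier

long readDistanceCm(int trigPin, int echoPin) {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);       // echo width in us
  return duration / 58;   // HC-SR04: ~58 us of round trip per cm
}

void setup() {
  pinMode(TRIG_X, OUTPUT); pinMode(ECHO_X, INPUT);
  pinMode(TRIG_Y, OUTPUT); pinMode(ECHO_Y, INPUT);
  Serial.begin(9600);      // forwarded to the phone over Bluetooth
}

void loop() {
  long x = readDistanceCm(TRIG_X, ECHO_X);   // distance to left edge
  long y = readDistanceCm(TRIG_Y, ECHO_Y);   // distance to bottom edge
  Serial.print(x); Serial.print(','); Serial.println(y);
  delayMicroseconds(600);  // the paper reports a 600 us pulse interval
}
```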
The device should be moved at a consistent speed over the paper. Fast movement does not reduce accuracy, but the displayed image will lag, because the smartphone's refresh rate cannot keep up with fast movement. After the multi-step data optimization explained in Section 3.3, the ranging accuracy of the device can reach up to 0.5 mm; therefore, variance in normal moving speed does not affect the result.

3.2. System Architecture Overview

An overview of the system architecture is shown in Figure 2. The system has five layers: the spatial data layer, graphical layer, presentation layer, function layer, and communication layer. The spatial data layer receives the raw distance data from the Arduino board. The graphical layer focuses on parsing SVG (Scalable Vector Graphics) files and retrieving digital information from the mobile database. The presentation layer processes the distance data, derives the location of the mobile device within the paper-based workplace, and generates a visual display of the corresponding digital information. The function layer provides system functions for users to manipulate digital information over the paper document. The communication layer creates an asynchronous communication channel with another mobile system.

3.3. Data Preprocessing

An ultrasonic sound wave is not a stable signal, because the wave is strongly affected by temperature, magnetic forces, and air density. In addition, factors such as a user's hand shaking and rotation can produce unstable position information and reduce functionality for the user. To enhance smoothness and accuracy, the raw data in the presentation layer is preprocessed before being used. The data preprocessing is divided into the following steps:
1. Previous data comparison;
2. Average distance calculation;
3. Linear regression; and
4. Noise elimination.

3.3.1. Previous Data Comparison

The interval between pulses from the ultrasonic sensor is 600 µs. Differences between high-frequency distance readings can cause the displayed image to jump. To address this issue, we compare the previous distance data with the current distance data; if the current reading does not differ significantly from the previous one, the system uses the previous reading instead. We use the standard deviation as the threshold. Predefined points were marked on the paper with a spacing of 10 cm between points. Data preprocessed by previous data comparison is shown in Table 1.
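A minimal sketch of this step is shown below; the threshold parameter stands in for the standard deviation the paper uses, and the function name is our own.

```cpp
#include <cmath>

// "Previous data comparison": readings that stay within a threshold of the
// last accepted value are snapped back to it, so the display does not jitter.
double stabilize(double current, double &previous, double threshold) {
  if (std::fabs(current - previous) < threshold) {
    return previous;            // treat the change as sensor noise
  }
  previous = current;           // accept a genuine movement
  return current;
}
```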

3.3.2. Average Distance Calculation

The ultrasonic sound sensor emits a sound wave every 600 µs, which means the system receives a distance reading every 600 µs, while the digital image refreshes every 3000 µs. The sensor therefore collects five distance readings per image refresh, but only one distance value is needed. To increase accuracy, we record all five readings, remove the maximum and minimum, and average the remaining three to obtain the best approximation of the real distance. Data processed by average distance calculation is shown in Table 2.
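The averaging step can be sketched as follows; the function name is ours, but the logic (drop the extremes of five samples, average the middle three) follows the description above.

```cpp
#include <algorithm>
#include <array>

// Of the five readings received per image refresh, discard the minimum and
// maximum and average the remaining three.
double trimmedMean(std::array<double, 5> samples) {
  std::sort(samples.begin(), samples.end());
  return (samples[1] + samples[2] + samples[3]) / 3.0;
}
```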

3.3.3. Linear Regression

We use linear regression [25] to model the relationship between the distance detected by Ultrasonic PhoneLens and the actual distance. The linear regression model generates a calibration line based on the values at the predefined points from the previous steps. The slope of the regression line compensates for the distance error of the Ultrasonic PhoneLens: once a distance reading is produced by the Ultrasonic PhoneLens, it is multiplied by the slope of the regression line to reproduce the value closest to the actual distance. In Figure 3, the slope of the regression line (1.0002) indicates that the preprocessed distance data is very close to the actual distance.
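The sketch below illustrates one way to fit and apply such a calibration slope, assuming a regression line through the origin (actual ≈ slope × measured), which matches the paper's use of a single multiplicative correction; the function name is our own.

```cpp
#include <vector>

// Least-squares slope through the origin, mapping measured -> actual.
double fitSlope(const std::vector<double> &measured,
                const std::vector<double> &actual) {
  double sxy = 0.0, sxx = 0.0;
  for (size_t i = 0; i < measured.size(); ++i) {
    sxy += measured[i] * actual[i];
    sxx += measured[i] * measured[i];
  }
  return sxy / sxx;
}

// A raw reading is then corrected by multiplication:
// double corrected = slope * rawDistance;   // slope ~ 1.0002 in Figure 3
```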

3.3.4. Noise Elimination

Users commonly rotate the mobile device unintentionally while browsing a paper document, and the resulting deviation angle causes inaccurate position information. To prevent incorrect viewing caused by the user's rotation, we use the magnetometer sensor integrated in the mobile device. We calibrate the magnetometer reading beforehand to obtain the deviation angle; the real distance is then calculated as the measured distance multiplied by the cosine of the deviation angle.
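A sketch of this compensation, assuming the deviation angle has already been extracted from the calibrated magnetometer reading:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Project the measured distance onto the paper axis: the deviation angle
// reported by the magnetometer shortens the reading by its cosine.
double compensateRotation(double measuredDistance, double deviationDegrees) {
  const double rad = deviationDegrees * kPi / 180.0;
  return measuredDistance * std::cos(rad);
}
```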

3.3.5. Accuracy Evaluation

Accuracy is a significant factor in the performance of the Ultrasonic PhoneLens. After the data has been preprocessed, the predefined points are tested to evaluate accuracy by comparing the optimized distance data with the actual distance. We use the absolute error to analyze the accuracy of the system; the absolute error is a measure of statistical accuracy that quantifies how close forecasts or predictions are to the eventual outcomes [26]:
Δx = |x₀ − x| / x    (1)
Equation (1) gives the absolute error rate, where x₀ is the value inferred by the Ultrasonic PhoneLens and x is the real distance. The error rates of the distance data for the five points are shown in Table 3.
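For illustration, Equation (1) translates directly into code:

```cpp
#include <cmath>

// Relative absolute error between the inferred distance x0 and the true x.
double absoluteErrorRate(double inferred, double actual) {
  return std::fabs(inferred - actual) / actual;
}
// e.g., absoluteErrorRate(10.02, 10.0) == 0.002, i.e., a 0.2% error.
```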

3.4. Multi-User Communication

In multi-user mode, ghost echoes usually appear when multiple users operate Ultrasonic PhoneLens devices at the same time: the sound waves collide with each other, so a sensor sometimes receives pulses from other sensors.
To solve this, we designed an asynchronous communication scheme. Two mobile devices are connected via Bluetooth and exchange messages: an Ultrasonic PhoneLens does not start working until it receives a message from the other device indicating that the other device has finished its distance data collection. More than two devices are possible if the smartphones use Ethernet or Wi-Fi as the communication medium.
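A minimal sketch of this turn-taking protocol is shown below; BluetoothLink and its methods are hypothetical stand-ins for the real Bluetooth channel, not an API from the paper.

```cpp
#include <string>

// Each device samples its sensors only after the peer signals completion,
// so the two ultrasonic bursts never overlap in the air.
struct BluetoothLink {
  void send(const std::string & /*msg*/) { /* write to the Bluetooth socket */ }
  std::string receiveBlocking() { /* block until a message arrives */ return "DONE"; }
};

void measurementTurn(BluetoothLink &peer, bool goFirst) {
  if (!goFirst) {
    peer.receiveBlocking();   // wait until the peer reports "DONE"
  }
  // ... trigger the ultrasonic sensors and collect one round of distances ...
  peer.send("DONE");          // hand the turn over to the other device
}
```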

4. Empirical Evaluation

4.1. Methodology

The purpose of the empirical evaluation is to investigate the performance of Ultrasonic PhoneLens in helping users acquire digital information when they browse paper documents. We compare Ultrasonic PhoneLens with the first generation of the system, Wiimote PhoneLens [9], a paper-based browsing system that uses a Wiimote and infrared LED lights to augment paper-based workplaces with digital information.
According to the Goal Question Metric (GQM) approach [27], the goals and hypotheses of our study are as follows:
Goal 1: Analyze PhoneLens and Ultrasonic PhoneLens for the purpose of evaluating their efficiency on multivalent documents.
Hypothesis 1: Participants using Ultrasonic PhoneLens need significantly less time than participants using Wiimote PhoneLens to locate an object.
Goal 2: Analyze the function of multi-user communication for the purpose of improving the usability of Ultrasonic PhoneLens.
Hypothesis 2: Users operating the Ultrasonic PhoneLens can communicate effectively and share annotations on digital paper with other end users.

4.2. Subjects and Evaluation Document

The subjects of the study were 28 undergraduate and graduate students at North Dakota State University. They had a variety of majors and no specific background in this study area.
The documents used in this study were from an architectural plan for an interior wall structure. The architectural plan includes four components: (a) wall plan, (b) lighting plan, (c) electric power plan, and (d) audio plan. The wall plan is a scaled diagram that shows the internal layout of the wall. The electric power plan shows the locations of circuit breakers and electrical sockets and the wiring between them; the audio plan shows the locations of the doorbell and its wiring; and the lighting plan shows the locations of lights and switches and the wiring between them. These plans overlap as layers in the architecture plan. The information for each layer is saved and displayed as a separate, independent digital file.

4.3. Tasks

To test whether Ultrasonic PhoneLens outperforms Wiimote PhoneLens, we designed navigation tasks and multi-user communication tasks for the study. Participants used both devices to complete tasks that required them to navigate different layers of an architecture plan. In addition, the multi-user communication function was evaluated; this was motivated by the fact that Wiimote PhoneLens has no capability of sharing digital information with other users.

4.3.1. Navigation Task

The subjects were asked to use the Ultrasonic PhoneLens and the Wiimote PhoneLens separately to identify the location of known objects in the paper-based wall-frame architecture plan (e.g., searching for the position of a circuit breaker in the electric power plan or a doorbell in the audio plan). A piece of white paper covered the paper-based wall plan, eliminating the possibility that a participant could simply locate the object by eye. Two targets were set up in the navigation task:
a. Browse the audio plan, and find the position of the junction box on the plan.
b. Browse the electric power plan, and find the position of a circuit breaker on the plan.
Because of the randomness and uncertainty of the search locations, the time to find an object could be affected by the search path rather than by the performance of the evaluated systems. To control for this, a search pattern was designed and used in the experiment: a snake-formation search that systematically covers the entire area of the paper document. To find the position of the junction box on the white paper, participants browsed the audio plan using the search pattern shown in Figure 4. The initial location of the mobile device is the bottom-left corner, and the junction box is at the top-right corner; participants start from the initial location with the mobile device.
An overview of the evaluation procedure is shown in Figure 5. A posttest, repeated-measures methodology [28] was applied in this study. All subjects were asked to complete the two tasks sequentially. The subjects were first introduced to the wall-frame architecture plan and then divided into two groups, with the order of training and testing on each system reversed between the groups. In the training session, subjects were trained on how to use the Wiimote PhoneLens and Ultrasonic PhoneLens systems. Next, the investigator asked users to finish tasks to examine the efficiency of each system; the investigator recorded the time of each task that the participants performed in order to evaluate the systems' efficiency. At the end, the two groups of participants were gathered together to finish a task that only applied to the Ultrasonic PhoneLens. Their usability experience was measured by a post-study questionnaire.

4.3.2. Multi-User Communication Task

The multi-user communication tasks were designed to simulate the working conditions of engineers who may need to share digital information remotely. Communication between multiple users is needed when they collaborate on a task: for instance, two architects working in separate rooms with the same architecture plan. In the experiment, the investigator served as one of the architects and the participating subject as the other. The task is described as follows:
The investigator adds an annotation of a target object (an electricity plug) as shared information in room A. The participant, in room B, is asked to find the annotation that the investigator placed. This communication allows multi-user interaction on digital documents without physical space constraints or barriers.

5. Results and Analysis

5.1. Comparison of Ultrasonic PhoneLens and Wiimote PhoneLens

Wiimote PhoneLens [9] achieved better performance than paper-based approaches. This section compares Ultrasonic PhoneLens with Wiimote PhoneLens (called PhoneLens in this study).

5.1.1. Browsing Efficiency

We first compared browsing efficiency. Figure 6 plots the average time taken by subjects in the two navigation tasks (task a: searching for a junction box; task b: searching for a circuit breaker).
During the navigation tasks, Ultrasonic PhoneLens proved more efficient than PhoneLens. In task a, the average time using Ultrasonic PhoneLens was around 15 s less than with the PhoneLens system. In task b, the gap narrowed to 10 s because of the learning effect (the PhoneLens system was used after the user finished the Ultrasonic PhoneLens task). Overall, the average times for PhoneLens and Ultrasonic PhoneLens in the navigation tasks were 99.65 s and 86.81 s, respectively; Ultrasonic PhoneLens is significantly more efficient than PhoneLens (p-value 0.003, below the 0.005 threshold).
The reason Ultrasonic PhoneLens exhibits better performance is that it gives users more flexibility in controlling the system. Ultrasonic PhoneLens allows users to face the front of the paper while browsing, whereas a PhoneLens user has to hold the mobile phone at the left side of the paper. The Wiimote is set at the right side of the workspace so that it can cover the entire paper document; to avoid blocking the Wiimote camera, users have to stand at the left side of the workplace, which increases the time needed when browsing the right area. The Ultrasonic PhoneLens, on the other hand, utilizes ultrasonic sensors to detect position. The sensors are assembled on the bottom of the mobile device, so users do not need to stand in any specific place for the system to work. It provides an unconstrained environment for users, without any blind areas.

5.1.2. Subjective Feedback

The rating results for five characteristics (C1 through C5, in Figure 7) summarize the overall experience of using the PhoneLens and Ultrasonic PhoneLens systems. The 5-point scale of the questionnaire is divided into two categories, "Disagree" (a rating value ≤ 3) and "Agree" (a rating value of 4 or 5). We calculated the percentage of "Disagree" and "Agree" responses for each characteristic to analyze which system is more useful and user-friendly. The percentages of agreement and disagreement for the Ultrasonic PhoneLens compared to the Wiimote PhoneLens are shown in Figure 7.
We used a non-parametric, one-sample Wilcoxon signed-rank test to analyze the results. In characteristic C1, the positive mean rank of Ultrasonic PhoneLens (subjects think Ultrasonic PhoneLens performs better than PhoneLens) is 9.90 and the negative mean rank (subjects think PhoneLens performs better than Ultrasonic PhoneLens) is 7.50; the subjects felt significantly more comfortable using the Ultrasonic PhoneLens system than the PhoneLens (p-value 0.004). In characteristic C2, the positive mean rank for Ultrasonic PhoneLens is 9.50 and the negative one is 8.73; the difference between the two systems on this characteristic is negligible (p-value 0.315). The results for C3 and C4 are similar to C2: the positive and negative ranks for C3 are 9.00 and 9.82 with a p-value of 0.275, and for C4 they are 10.71 and 8.73 with a p-value of 0.623. Consequently, we conclude that the readability of the PhoneLens is marginally better than that of the Ultrasonic PhoneLens. In C5, the positive rank of the Ultrasonic PhoneLens (7.64) is marginally higher than the negative rank (7.64) with a p-value of 0.029, which shows that the Ultrasonic PhoneLens is more approachable for new users and that its learning process is superior to that of PhoneLens.
The results indicate that participants prefer the Ultrasonic PhoneLens when browsing the architectural plan. One reason is that subjects frequently blocked the Wiimote camera when using PhoneLens, especially when browsing the right side of the document where the Wiimote is set up; this often caused unexpected system interruptions and negatively impacted the user experience. Some people tried to stand to the side of the workplace to avoid this problem, but that is a very uncomfortable way to use the system. The Ultrasonic PhoneLens allows the user to stand comfortably in front of the workplace; however, the hand holding the Ultrasonic PhoneLens can interfere with the ultrasonic sound if part of the hand blocks a sensor, which may reduce accuracy and readability while browsing the plan. This is probably why the Ultrasonic PhoneLens was rated lower than the PhoneLens on characteristics C2, C3, and C4.

5.2. Evaluation of the Performance of Multi-User Communication

The performance of multi-user communication was analyzed with the PSSUQ (Post-Study System Usability Questionnaire), which consists of three sub-scales: system quality, information quality, and interface quality. The three sub-scales are averaged to obtain the overall satisfaction score, which serves as the evaluation of this function. Averaging the answered items is a good way to calculate the scale score and enhances the flexibility of the questionnaire [29]: for example, subjects can skip the question "Did the system give an error message that clearly told me how to fix a problem?" if they made no mistakes and no error message was displayed. Averaging items to obtain scale scores therefore does not affect the statistical properties of the scores.
As shown in Figure 8, the average scale score for system quality is 6.77, for information quality 6.15, and for interface quality 6.58; the overall satisfaction score is 6.5 out of 7. This shows that the participants appreciated the performance of the multi-user communication.

6. Conclusions and Future Work

This paper presents a novel, low-cost, spatially aware mobile system called Ultrasonic PhoneLens. A user can use this system to dynamically visualize digital information within a paper-based workspace. The main benefits of the system are its ease of use and multi-user communication. The system applies an asynchronous network to avoid ghost-echo effects, and data preprocessing enhances its accuracy and performance.
An empirical study was designed to evaluate the usability and user experience of the system. Specifically, we compared Ultrasonic PhoneLens with Wiimote PhoneLens and obtained positive feedback. In future work, we plan to compare Ultrasonic PhoneLens with other current augmented paper browsing systems. We also plan to use Ultrasonic PhoneLens over real objects other than paper documents, such as browsing a wall to replace the function of a stud finder. More evaluation on everyday tasks is also planned.

Acknowledgments

The authors would like to thank the students from North Dakota State University who participated in the experiments, and the anonymous reviewers for their valuable comments and constructive suggestions to improve the quality of the paper.

Author Contributions

Di Wang implemented the algorithm and conducted the experiments. Chunying Zhao and Jun Kong supervised the project, designed the overall system, and analyzed the evaluation results.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gherghina, A.; Olteanu, A.; Tapus, N. A marker-based augmented reality system for mobile devices. In Proceedings of the 11th IEEE RoEduNet International Conference, Sinaia, Romania, 17–19 January 2013; pp. 1–6.
2. Rohs, M. Marker-based embodied interaction for handheld augmented reality games. J. Virtual Real. Broadcast. 2007, 4, 1860–2037.
3. Nishino, H. A shape-free, designable 6-DoF marker tracking method for camera-based interaction in mobile environments. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 1055–1058.
4. Sun, K.; Yu, J. Video affective content recognition based on genetic algorithm combined HMM. In Proceedings of the 2007 International Conference on Entertainment Computing, Berlin/Heidelberg, Germany, 2007; pp. 249–254.
5. Arif, T.; Singh, T.; Bose, J. A system for intelligent context based content mode in camera applications. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), New Delhi, India, 24–27 September 2014; pp. 1504–1508.
6. Mottola, L.; Cugola, G.; Picco, G.P. A self-repairing tree topology enabling content-based routing in mobile ad hoc networks. IEEE Trans. Mob. Comput. 2008, 7, 946–960.
7. Reilly, D.; Rodgers, M.; Argue, R.; Nunes, M.; Inkpen, K. Marked-up maps: Combining paper maps and electronic information resources. Pers. Ubiquit. Comput. 2006, 10, 215–226.
8. Lee, J.C. Hacking the Nintendo Wii remote. IEEE Pervasive Comput. 2008, 7, 39–45.
9. Roudaki, A.; Kong, J.; Walia, G.S. PhoneLens: A low-cost, spatially aware, mobile-interaction device. IEEE Trans. Hum. Mach. Syst. 2014, 44, 301–314.
10. Anoto, A.B. Development Guide for Services Enabled by Anoto Functionality; 2002. Available online: http://www.citeulike.org/user/johnsogg/article/4294876 (accessed on 4 June 2017).
11. Liao, C.; Guimbretière, F. Evaluating and understanding the usability of a pen-based command system for interactive paper. ACM Trans. Comput. Hum. Interact. 2012, 19.
12. Erol, B.; Antunez, E.; Hull, J.J. Hotpaper: Multimedia interaction with paper using mobile phones. In Proceedings of the ACM International Conference on Multimedia, 2008; pp. 399–408.
13. Garcia, J.; Tsandilas, T.; Agon, C.; Mackay, W. PaperComposer: Creating interactive paper interfaces for music composition. In Proceedings of the 26th Conference on l'Interaction Homme-Machine, Lille, France, 28–31 October 2014; pp. 1–8.
14. Pietrzak, T.; Malacria, S.; Lecolinet, E. S-Notebook: Augmenting mobile devices with interactive paper for data management. In Proceedings of the International Working Conference on Advanced Visual Interfaces, Capri Island, Italy, 21–25 May 2012; pp. 733–736.
15. Song, H.; Guimbretière, F.; Grossman, T.; Fitzmaurice, G. MouseLight: Bimanual interactions on digital paper using a pen and a spatially-aware mobile projector. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 2451–2460.
16. Margetis, G.; Zabulis, X.; Koutlemanis, P.; Antona, M.; Stephanidis, C. Augmented interaction with physical books in an Ambient Intelligence learning environment. Multimed. Tools Appl. 2013, 67, 473–495.
17. Jee, H.K.; Lim, S.; Youn, J.; Lee, J. An augmented reality-based authoring tool for E-learning applications. Multimed. Tools Appl. 2014, 68, 225–235.
18. Hansen, T.R.; Eriksson, E.; Lykke-Olesen, A. Mixed interaction space: Designing for camera based interaction with mobile devices. In Proceedings of the CHI '05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 1933–1936.
19. Masutani, Y.; Dohi, T.; Yamane, F.; Iseki, H.; Takakura, K. Augmented reality visualization system for intravascular neurosurgery. Comput. Aided Surg. 1998, 3, 239–247.
20. Klemmer, S.R.; Graham, J.; Wolff, G.J.; Landay, J.A. Books with voices: Paper transcripts as a physical interface to oral histories. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03), Ft. Lauderdale, FL, USA, 5–10 April 2003; pp. 89–96.
21. Rekimoto, J. CyberCode: Designing augmented reality environments. In Proceedings of DARE: Designing Augmented Reality Environments, Elsinore, Denmark, 12–14 April 2000; pp. 1–10.
22. Rohs, M. Visual code widgets for marker-based interaction. In Proceedings of the 25th IEEE International Conference on Distributed Computing Systems Workshops, Columbus, OH, USA, 6–10 June 2005; pp. 506–513.
23. Wen, D.; Huang, Y.; Liu, Y.; Wang, Y. Study on an indoor tracking system with infrared projected markers for large-area applications. In Proceedings of the 8th International Conference on Virtual Reality Continuum and Its Applications in Industry, Yokohama, Japan, 14–15 December 2009; pp. 245–249.
24. Kuantama, E.; Setyawan, L.; Darma, J. Early flood alerts using short message service (SMS). In Proceedings of the International Conference on System Engineering and Technology (ICSET), Bandung, Indonesia, 11–12 September 2012; pp. 1–5.
25. Freedman, D.A. Statistical Models: Theory and Practice; Cambridge University Press: Cambridge, UK, 2009; p. 26.
26. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing; Dover: New York, NY, USA, 1972; p. 14.
27. Basili, V.R.; Caldiera, G.; Rombach, H.D. The Goal Question Metric Approach; Technical Report; Department of Computer Science, University of Maryland: College Park, MD, USA, 1994.
28. Cozby, P. Methods in Behavioral Research, 10th ed.; McGraw Hill: New York, NY, USA, 2009.
29. Winkler, C.; Seifert, J.; Reinartz, C.; Krahmer, P.; Rukzio, E. Penbook: Bringing pen+paper interaction to a tablet device to facilitate paper-based workflows in the hospital domain. In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces, St. Andrews, UK, 6–9 October 2013; pp. 283–286.
Figure 1. Ultrasonic PhoneLens.
Figure 2. Ultrasonic PhoneLens System Architecture.
Figure 3. Ultrasonic PhoneLens linear regression.
Figure 4. Searching pattern on the audio plan.
Figure 5. The procedure of user study design.
Figure 6. Average time spent on Task a and Task b.
Figure 7. Subjective feedback on PhoneLens and Ultrasonic PhoneLens.
Figure 8. Evaluation of the performance of multi-user communication.
Table 1. Distance data after previous data comparison.

Predefined Point         1      2      3      4      5
Actual Distance (cm)     10     20     30     40     50
Average Distance (cm)    10.02  20.44  30.33  39.25  50.26
Standard Deviation       0.09   0.10   0.06   0.12   0.15
Table 2. Distance data after average distance calculation.

Predefined Point         1      2      3      4      5
Actual Distance (cm)     10     20     30     40     50
Average Distance (cm)    10.04  20.24  30.10  39.70  50.28
Standard Deviation       0      0.09   0.10   0      0.12
Table 3. Error rates of distance data.

Point                      1      2      3      4      5
Actual Distance (cm)       10     20     30     40     50
Raw Data Error Rate        0.56%  1.68%  1.13%  0.27%  0.57%
Processed Data Error Rate  0.01%  2.14%  1.17%  0.02%  0.01%
