Article

Android Platform Based Smartphones for a Logistical Remote Association Repair Framework

Shao-Fan Lien, Chun-Chieh Wang, Juhng-Perng Su, Hong-Ming Chen and Chein-Hsing Wu

1 Integrated Logistic Support Centre, Chung-Shan Institute of Science and Technology, No. 481, Sec. Chia An, Zhongzheng Rd., Longtan Shiang, Taoyuan County 325, Taiwan
2 Electronic Engineering Department, Chienkuo Technology University, No. 1, Chieh Shou N. Road, Changhua City 500, Taiwan
3 Department of Electrical Engineering, National Dong Hwa University, No. 1, Sec. 2, Da Hsueh Rd., Shoufeng, Hualien County 97401, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2014, 14(7), 11278-11292; https://doi.org/10.3390/s140711278
Submission received: 14 March 2014 / Revised: 13 June 2014 / Accepted: 16 June 2014 / Published: 25 June 2014

Abstract

The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and for automatically searching for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. The DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.

1. Introduction

Identifying circuit modules remotely through the network and replacing invalid ones in time is a crucial issue in logistics. To maintain a high level of logistics support, large-scale network systems, such as rapid transit systems, telecommunication networks or power systems, should be properly maintained. Specifically, some sub-systems or modules are often very far away from the repair station, and system maintenance is usually very costly. In view of this, the development of remote association repair technology (RART) is in high demand for logistics to reduce repair costs and time. The concept of remote maintenance has been developed for decades; however, in the early period remote maintenance technology was centered on Internet technology. In recent years, the applications of remote maintenance technology have drawn much attention with the development of wireless transmission technology. Specifically, in manufacturing [1], control and robot systems [2–4], weapon systems [5] and logistics applications [6], remote maintenance technology plays an important role. Nevertheless, most research focuses on the diagnosis of the equipment, and it is generally not an easy task to automatically perform fault detection of invalid modules.

In this paper, we have developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in efficiently and effectively maintaining the operation of systems. LRARF, as shown in Figure 1, was established to aid faulty module detection and maintenance by integrating QR-code technology, image identification, wireless transmission and intelligent data mining technologies. The architecture of LRARF includes four parts: smart mobile phones, a DBMS, a MSC and wireless networks. Transmission in LRARF is performed through High-Speed Downlink Packet Access (HSDPA) or WiFi networks. In this framework, images of invalid modules are sent back to the DBMS and the MSC through the APP, and the experimental results show that the image recognition algorithm is capable of identifying the invalid module. The corresponding maintenance manual for an invalid module is then sent by maintenance personnel via e-mail to the repairman's smart phone. In addition, voice and live video can be recorded synchronously on the MSC for later use.

2. Logistical Remote Association Repair Framework

2.1. The Remote Association Repair Framework Process

The Logistical Remote Association Repair Framework process is shown in Figure 2. In this process, we design four operation stages to identify a module:

  • Stage 1 is the invalid module identification by repairmen.

  • Stage 2 is the QR-code recognition of invalid modules.

  • Stage 3 is the image recognition of invalid modules.

  • Stage 4 is assisted identification by the MSC engineers.

In Stage 1, repairmen identify the failed module. If the modules can be identified, then the modules will be repaired. If the modules cannot be recognized, then the repairman may capture the module QR-codes (Stage 2). If the QR-codes are damaged, the repairman uses a smart mobile phone to capture an image of the module (Stage 3). The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via our algorithm.

Furthermore, the image features will be extracted by our APP and transferred to a cloud database to search for the best matching module. Moreover, the DBMS will automatically search for the maintenance manual corresponding to the invalid module and transmit it back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. Table 1 shows the four-stage operations and situational modes.

2.2. Database Management System (DBMS)

The architecture of the DBMS is shown in Figure 3. There are three layers in the DBMS. The first layer is called “Index Layer 1”; the image features and QR-codes are stored in this layer and serve as the indexes for searching the module ID in the second layer. The second layer is both the output of Index Layer 1 and the index (Index Layer 2) for the third layer. The module ID is then used as the index for retrieving the module sizes, the detection list, the maintenance procedures, the testing list and the operation manual.
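
The layered lookup can be pictured as two chained indexes. The following minimal Java sketch is a hypothetical illustration of that structure; the class and field names are assumptions and are not taken from the paper.

```java
// Hypothetical sketch of the DBMS lookup chain: QR-code (or matched image
// features) -> module ID -> maintenance documents. Names are illustrative.
import java.util.HashMap;
import java.util.Map;

public class MaintenanceDbms {

    /** Documents returned for one module (the third layer of the DBMS). */
    public static class ModuleRecord {
        public String moduleSize;
        public String detectionList;
        public String maintenanceProcedure;
        public String testingList;
        public String operationManual;
    }

    // Index Layer 1: decoded QR-code text -> module ID.
    private final Map<String, String> qrToModuleId = new HashMap<>();
    // Index Layer 2: module ID -> maintenance documents.
    private final Map<String, ModuleRecord> moduleIdToRecord = new HashMap<>();

    /** Looks up the module record from a decoded QR-code; returns null if unknown. */
    public ModuleRecord findByQrCode(String qrCode) {
        String moduleId = qrToModuleId.get(qrCode);
        return moduleId == null ? null : moduleIdToRecord.get(moduleId);
    }
}
```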

3. Image Recognition

3.1. Android Operating System

Android is an open source operating system (OS) released by Google in 2007. The operating structure of Android can be divided into five layers: Application, Application Framework, Libraries, Android Runtime and Linux Kernel, as shown in Figure 4. An important feature of the Android OS is that Google provides a free SDK and source code for program developers [7–9]. In this study, Android 2.1 is employed for the application, and we developed the video transmission control interface using the Android SDK. The Android Runtime is based on the Dalvik Virtual Machine (VM) and the Java SE class library; Java applications are compiled into bytecode that runs on the Dalvik VM. The application program development process is shown in Figure 5.

The Android connection between the server and the client is illustrated in Figure 6. The connection process consists of the following steps [10] (a minimal sketch follows the steps):

  • Step 1: initialize a listening socket with the Java class “ServerSocket()”.

  • Step 2: open the communication port and call “accept()” to handle the connection request from the client.

  • Step 3: the server maintains the connection requested by the client until both the server and client sockets are closed.
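
As one concrete illustration of these steps, the following minimal Java sketch opens a listening socket, accepts a client connection and exchanges data until the sockets are closed. The port number and the echo-style message handling are assumptions for illustration, not details taken from the paper.

```java
// Minimal sketch of the server/client handshake described above.
// Port number and message format are illustrative assumptions.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class RepairLinkServer {
    public static void main(String[] args) throws Exception {
        // Step 1: create the listening socket (port 8080 is an assumption).
        try (ServerSocket server = new ServerSocket(8080)) {
            // Step 2: block until a client (the smartphone APP) connects.
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                // Step 3: exchange data until either side closes its socket.
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("ACK: " + line);  // acknowledge each received line
                }
            }
        }
    }
}
```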

3.2. Image Acquisition Application

An Android-based smartphone is utilized in this research to implement the image acquisition. Android is a Linux-kernel-based operating system, and applications run on the Android Runtime. The program development platform is the free Eclipse Software Development Kit (SDK). Therefore, we chose an Android-operated mobile phone to capture images and transmit the image data via HSDPA. In this study, we used a Motorola Defy MB525 smart phone as the client and a remote computer as the server. The CPU is an 800 MHz TI OMAP3610 and the development platform is Eclipse version 3.6.2. The camera resolution is 5 megapixels and the Java Development Kit (JDK) is JDK 6. Figure 7 shows the real-time image capture interface and Figure 8 depicts the program of image acquisition and camera control settings.

The remote real-time image transformation settings are shown in Figure 9. The “Camera.PreviewCallback” function is used for real-time image acquisition, and the frames are transmitted to the server by the “DataOutputStream()” function. On the server computer, the “ColorModel()” and “Raster()” functions are used to receive and decode the images. The resulting real-time image on the server computer is shown in Figure 10, which confirms that the remote real-time image transformation is established.
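
A minimal client-side sketch of this preview-callback streaming is given below, using the legacy android.hardware.Camera API that was current for Android 2.1. The server address handling, port, length-prefix framing and JPEG quality are illustrative assumptions, not the paper's exact implementation.

```java
// Sketch: send each camera preview frame to the server as a JPEG over a socket.
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class PreviewStreamer implements Camera.PreviewCallback {
    private final DataOutputStream out;

    public PreviewStreamer(String host, int port) throws Exception {
        out = new DataOutputStream(new Socket(host, port).getOutputStream());
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        try {
            Camera.Size size = camera.getParameters().getPreviewSize();
            // Preview frames arrive as NV21; compress each frame to JPEG before sending.
            YuvImage frame = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
            ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
            frame.compressToJpeg(new Rect(0, 0, size.width, size.height), 70, jpeg);
            // Send the frame length first so the server knows how many bytes to read.
            out.writeInt(jpeg.size());
            out.write(jpeg.toByteArray());
            out.flush();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```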

3.3. QR-code Recognition

Barcodes were developed in the 1950s and have been widely used in a variety of applications due to their high reliability, efficiency and cost-effectiveness. However, barcodes can only record limited information about items. The 2D-barcode was devised to be capable of including a brief description in addition to assigning a number to an item. Moreover, sound, pictures and even traditional Chinese characters can be encoded as 2D-barcodes. There are many forms of 2D-barcodes for various commercial applications. For example, in 1994 the Japanese company Denso [11] developed the Quick Response code (QR-code), a rapidly readable matrix barcode, shown in Figure 11.

The QR-code design is very clever. The most important features of a QR-code are the position marks at the top-left, bottom-left and top-right corners. These positioning marks are triple concentric square marks, called position detection patterns (PDPs). The three nested PDP squares are a 7 × 7 black block, a 5 × 5 white block and a 3 × 3 black block, giving a width ratio of 1:1:3:1:1 across the pattern. This ratio is very unlikely to occur elsewhere in the symbol, so QR-codes can be read quickly by rapidly locating and orienting the PDPs.
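
For illustration, the 1:1:3:1:1 width ratio can be checked on run lengths taken along a binarized image row. The sketch below is a simplified, hypothetical check rather than the paper's detector; a complete detector would additionally require the first run of each window to be black and would confirm the ratio in the vertical and diagonal directions.

```java
// Illustrative check of the 1:1:3:1:1 run-length ratio used to locate the PDPs.
// `row` is assumed to be one binarized image row (true = black module).
public final class FinderPatternCheck {

    /** Returns true if five consecutive runs approximate the 1:1:3:1:1 ratio. */
    public static boolean matchesRatio(int[] runs) {
        if (runs.length != 5) return false;
        int total = 0;
        for (int r : runs) {
            if (r == 0) return false;
            total += r;
        }
        float unit = total / 7.0f;   // expected width of a single module
        float tol = unit / 2.0f;     // allow 50% deviation per run
        return Math.abs(runs[0] - unit) < tol
                && Math.abs(runs[1] - unit) < tol
                && Math.abs(runs[2] - 3 * unit) < 3 * tol
                && Math.abs(runs[3] - unit) < tol
                && Math.abs(runs[4] - unit) < tol;
    }

    /** Collects run lengths along one row and tests each window of five runs. */
    public static boolean rowContainsFinderPattern(boolean[] row) {
        java.util.List<Integer> runs = new java.util.ArrayList<>();
        int count = 1;
        for (int x = 1; x < row.length; x++) {
            if (row[x] == row[x - 1]) {
                count++;
            } else {
                runs.add(count);
                count = 1;
            }
        }
        runs.add(count);
        // NOTE: a full detector would also require the first run in the window
        // to be black and would verify the pattern vertically and diagonally.
        for (int i = 0; i + 5 <= runs.size(); i++) {
            int[] window = {runs.get(i), runs.get(i + 1), runs.get(i + 2),
                            runs.get(i + 3), runs.get(i + 4)};
            if (matchesRatio(window)) return true;
        }
        return false;
    }
}
```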

The calibration of the QR-code is the crucial technology for QR-code decoding. In this paper, the Linear Projective Transform (LPT) [12] is utilized for QR-code calibration. There exists a projective matrix between the warped QR-code and the corrected QR-code. The transformation is given by:

$$\begin{pmatrix} x'_1 \\ x'_2 \\ x'_3 \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \tag{1}$$
or X′ = HX. Let the inhomogeneous coordinates of X and X′ in planes A and A′ be (x, y) and (x′, y′), respectively. The projective transformation can then be written as:
$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \tag{2}$$

Rearranging Equation (2) we have:

$$\begin{aligned} h_{11}x + h_{12}y + h_{13} - x'h_{31}x - x'h_{32}y - x'h_{33} &= 0 \\ h_{21}x + h_{22}y + h_{23} - y'h_{31}x - y'h_{32}y - y'h_{33} &= 0 \end{aligned} \tag{3}$$

Equation (3) is an over-determined system; the Direct Linear Transformation (DLT) algorithm [13] is utilized for solving it. Rewriting Equation (3), the linear system can be expressed in matrix form:

$$A\hat{H} = \hat{x} \tag{4}$$
where $\hat{x} = [\,x'_1 \; y'_1 \; x'_2 \; y'_2 \; \cdots \; x'_i \; y'_i\,]^{T}$, $\hat{H} = [\,h_{11} \; h_{12} \; \cdots \; h_{33}\,]^{T}$, and:
$$A = \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x'_1 x_1 & -x'_1 y_1 & -x'_1 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -y'_1 x_1 & -y'_1 y_1 & -y'_1 \\ x_2 & y_2 & 1 & 0 & 0 & 0 & -x'_2 x_2 & -x'_2 y_2 & -x'_2 \\ 0 & 0 & 0 & x_2 & y_2 & 1 & -y'_2 x_2 & -y'_2 y_2 & -y'_2 \\ \vdots & & & & & & & & \vdots \\ x_i & y_i & 1 & 0 & 0 & 0 & -x'_i x_i & -x'_i y_i & -x'_i \\ 0 & 0 & 0 & x_i & y_i & 1 & -y'_i x_i & -y'_i y_i & -y'_i \end{bmatrix} \tag{5}$$

The algorithm is expressed as follows (a code sketch follows the steps):

  • Step 1: For each point correspondence, compute the corresponding rows of the matrix A (at least i ≥ 4 correspondences are required).

  • Step 2: Obtain the SVD (Singular value decomposition) of A.

  • Step 3: Let A = UDV^T. The last column of the matrix V, corresponding to the smallest singular value of A, is composed of the elements of the vector Ĥ.
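
A compact Java sketch of these steps is given below, using the Apache Commons Math linear-algebra classes (an assumption; the paper does not state which SVD implementation was used). The 2i × 9 matrix A is assembled from the point correspondences, and Ĥ is taken as the right singular vector associated with the smallest singular value, as in Step 3.

```java
// DLT homography estimation sketch following Steps 1-3 above.
// Uses org.apache.commons.math3 for the SVD; point data are assumed inputs.
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class DltHomography {

    /** src[i] = {x, y}, dst[i] = {x', y'}; at least four correspondences are needed. */
    public static double[] estimate(double[][] src, double[][] dst) {
        if (src.length < 4 || src.length != dst.length) {
            throw new IllegalArgumentException("need at least 4 point correspondences");
        }
        int n = src.length;
        double[][] a = new double[2 * n][9];
        for (int i = 0; i < n; i++) {
            double x = src[i][0], y = src[i][1];
            double xp = dst[i][0], yp = dst[i][1];
            // Two rows of A per correspondence, as in Equation (5).
            a[2 * i]     = new double[]{x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp};
            a[2 * i + 1] = new double[]{0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp};
        }
        RealMatrix A = new Array2DRowRealMatrix(a, false);
        SingularValueDecomposition svd = new SingularValueDecomposition(A);
        // Singular values are returned in non-increasing order, so the last column
        // of V corresponds to the smallest one and gives Ĥ = [h11 ... h33]^T.
        return svd.getV().getColumn(8);
    }
}
```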

3.4. Module Image Recognition

The main features of a circuit board are the shape and the number of chips. Figure 12 shows the module image recognition process. Image pre-processing consists of a color-model transformation and binarization. YCbCr is chosen as the color model because it is less sensitive to lighting variations. We then use the Hough Transform [14] to detect lines and locate the edges of the module and its main chips, and the Harris Corner Detection Algorithm [15,16] to find the corners of the module and the main chips. The main principle of the Harris corner detector is to use a Gaussian filter to compute the corner response of each pixel in the image. The Gaussian filter suppresses noise and reduces the probability of misjudgment, so the Harris corner detector performs well in our application. The first module feature is the area ratio of the main chips to the module; the second is the number of main chips; the third is the position of the main chip within the module. The database searching steps are shown in Figure 13.
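
As one possible realization of this pipeline, the sketch below uses the OpenCV Java bindings for the color-model transformation, binarization, Hough line detection and the Harris corner response. The paper does not name a specific library, and the thresholds and kernel sizes here are illustrative assumptions.

```java
// Sketch of the pre-processing and feature steps: YCrCb conversion (OpenCV's
// ordering of YCbCr), Otsu binarization, Hough line detection, Harris response.
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class ModuleFeatures {

    /** Converts a BGR board image, binarizes it, then runs Hough and Harris. */
    public static void extract(Mat bgr) {
        // Colour-model transformation: the luminance/chrominance space is less
        // sensitive to lighting changes than BGR.
        Mat ycrcb = new Mat();
        Imgproc.cvtColor(bgr, ycrcb, Imgproc.COLOR_BGR2YCrCb);

        // Binarize the luminance channel with Otsu's threshold.
        java.util.List<Mat> channels = new java.util.ArrayList<>();
        Core.split(ycrcb, channels);
        Mat binary = new Mat();
        Imgproc.threshold(channels.get(0), binary, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

        // Hough transform: detect straight edges of the module and main chips.
        Mat edges = new Mat();
        Imgproc.Canny(binary, edges, 50, 150);
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 30, 10);

        // Harris corner detection: corner response for each pixel.
        Mat gray32 = new Mat();
        channels.get(0).convertTo(gray32, CvType.CV_32F);
        Mat response = new Mat();
        Imgproc.cornerHarris(gray32, response, 2, 3, 0.04);

        System.out.println("line segments: " + lines.rows()
                + ", corner response size: " + response.size());
    }
}
```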

In Figure 13, the Adaptive Network-Based Fuzzy Inference System (ANFIS) is used for data mining in our database. There are five layers in the ANFIS architecture [17,18], which are described as follows (a minimal forward-pass sketch appears after the list):

  • Layer 1: Every node i in this layer is an adaptive node with a node function μj(xi). In this paper, the generalized Gaussian membership function:

    $$\mu_j(x_i) = \exp\!\left[-\left(\frac{x_i - c_{ji}}{a_{ji}}\right)^{2}\right], \quad \text{for } j_i = 1, \ldots, k \tag{6}$$
    is used, where xi is the input of node i, μj is the membership function of the linguistic label associated with this node, and cji, aji are called premise parameters.

  • Layer 2: Every node in this layer is a fixed node whose output is the product of all the incoming signals:

    $$w_p = \prod_{i=1}^{N} \mu_{j_i}(x_i), \quad \text{for } p = 1, \ldots, P; \; i = 1, \ldots, N \tag{7}$$

  • Layer 3: Here, the ith node calculates the ratio of ith rule's firing strength to the sum of firing strengths of all rules:

    $$\bar{w}_p = \frac{w_p}{\sum_{p=1}^{P} w_p} \tag{8}$$

  • Layer 4: Every node i in this layer is an adaptive node with a node function:

    $$\bar{w}_p f_p = \bar{w}_p \left( \sum_{i=0}^{N} r_{pi} x_i \right), \quad x_0 = 1 \tag{9}$$
    where w̄p is the normalized firing strength from Layer 3 and rpi are referred to as the consequent parameters.

  • Layer 5: The single node in this layer is called the output node; it computes the overall output as the weighted average of all incoming signals:

    $$\sum_{p=1}^{P} \bar{w}_p f_p = \frac{\sum_{p=1}^{P} w_p f_p}{\sum_{p=1}^{P} w_p} \tag{10}$$
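
For clarity, the five layers reduce to the following forward pass. The sketch below evaluates Equations (6)–(10) for given premise and consequent parameters; the parameter layout is an assumption for illustration, and parameter training (the adaptive part of ANFIS) is not shown.

```java
// Minimal ANFIS forward-pass sketch for the five layers described above.
public class AnfisForward {

    /**
     * @param x  inputs x_1..x_N (x_0 = 1 is added internally)
     * @param c  premise centres  c[p][i]
     * @param a  premise widths   a[p][i]
     * @param r  consequent parameters r[p][i], i = 0..N
     * @return   the Layer 5 output (weighted average of the rule outputs)
     */
    public static double evaluate(double[] x, double[][] c, double[][] a, double[][] r) {
        int rules = c.length, n = x.length;
        double[] w = new double[rules];
        double sumW = 0.0;

        for (int p = 0; p < rules; p++) {
            // Layers 1-2: Gaussian membership grades multiplied into firing strength w_p.
            double wp = 1.0;
            for (int i = 0; i < n; i++) {
                double z = (x[i] - c[p][i]) / a[p][i];
                wp *= Math.exp(-z * z);
            }
            w[p] = wp;
            sumW += wp;
        }

        double output = 0.0;
        for (int p = 0; p < rules; p++) {
            // Layer 3: normalized firing strength.
            double wBar = w[p] / sumW;
            // Layer 4: first-order consequent f_p = r_p0 + sum_i r_pi * x_i (x_0 = 1).
            double fp = r[p][0];
            for (int i = 0; i < n; i++) {
                fp += r[p][i + 1] * x[i];
            }
            // Layer 5: accumulate the weighted average.
            output += wBar * fp;
        }
        return output;
    }
}
```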

The architecture of the ANFIS-based training system is illustrated in Figure 14a. The inputs of the ANFIS are the area ratio of the main chip to the module and the size and position of the main chip in the module. The output is the module ID. For illustration purposes, the training results of the first three sampled modules are depicted in Figure 14b, in which twenty-six training data points (blue circles) were used for each sample and the training results are marked with red stars. The training results suggest that the ANFIS-based system is quite effective for this application.

4. Experimental Results

The image transmission results are shown in Figure 15. The transmission distances of Figure 15a and Figure 15b are 6 km and 64 km, respectively.

Figure 16 shows a module identification case. The test module is marked by the red block. The yellow and blue blocks are the identified main chips, marked chip 1 and chip 2, respectively. The area ratio of the main chip to the module can be calculated from Figure 16, and a cross marks the center of area of the corresponding chip. In our experiment, fifteen different modules, including the one in Figure 16, were taken as samples for the performance test. Though these samples are similar in appearance, they all have different circuit functionalities. The success rate for each sample is computed as follows:

$$\text{Success Rate} = \frac{\text{Number of successful identifications}}{\text{Total number of tests}} \times 100\% \tag{11}$$

Table 2 shows the results of QR-code and image recognition. The average image recognition rate is 73.23% with an average computing time of 3.185 s, while the average QR-code recognition rate is 96.287% with an average computing time of 0.043 s. In practice, the threshold for a successful identification rate, for both image recognition and QR-code recognition, is set at 70%. If the success rate is below this threshold, as for the image recognition rates of Module 14 and Module 15 in Table 2, the result is labelled “Failure” and the module is sent back to the maintenance support centre for repair, as indicated in Figure 2. Table 2 also shows that QR-code recognition clearly outperforms image recognition. After the circuit board is identified, an e-mail is sent to the repairmen, who only need to download and open the attached files; the invalid modules can then be repaired following the instructions.

5. Conclusions

In this paper, we have proposed a Logistical Remote Association Repair Framework (LRARF) to help repairmen maintain large-scale network systems and keep them at a high quality-of-service level. The repairman can use any kind of smart mobile phone to capture QR-codes and images of faulty circuit boards so that the invalid modules can be recognized via the proposed algorithm. Specifically, the DBMS automatically searches for the maintenance manual corresponding to the invalid modules and transmits the maintenance instructions back to the repairman. The experimental results not only validate the effectiveness of the proposed Android-based platform in recognizing invalid modules, but also show that live video can be recorded on the MSC synchronously.

Acknowledgments

The authors would like to express their gratitude to the anonymous reviewers for their valuable comments and suggestions.

Author Contributions

The work presented here was carried out in collaboration between all authors. Juhng-Perng Su defined the research theme. Juhng-Perng Su, Shao-Fan Lien and Chun-Chieh Wang co-designed methods and experiments. Hong-Ming Chen and Chein-Hsing Wu co-worked on associated data collection and carried out the laboratory experiments. All authors have contributed to, seen and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, J. Strategy and challenges on remote diagnostics and maintenance for manufacturing equipment. Proceedings of the Reliability and Maintainability Symposium, Philadelphia, PA, USA, 13–16 January 1997; pp. 368–370.
  2. Shao, F.; Shao, C.; Sun, R. Design and Application of Remote Fault Maintenance Solution Based on IRL and Checkpointing. Int. Workshop Comput. Sci. Eng. 2009, 1, 233–237. [Google Scholar]
  3. Kubo, M.; Ikeda, H. New Remote Maintenance System (RMS) for Distributed Control System (DCS). Proceedings of the International Joint Conference SICE-ICASE, Busan, Korea, 18–21 October 2006; pp. 5208–5211.
  4. Luo, R.C.; Hsieh, T.C.; Su, K.L.; Tsai, C.F. An intelligent remote maintenance and diagnostic system on mobile robot. IEEE Conf. Ind. Electron. Soc. 2002, 4, 2675–2680. [Google Scholar]
  5. Gu, G.; Hu, J.; Zhang, L.; Zhang, H. Research on remote maintenance support system of surface-to-air missile equipment. Proceedings of the International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, Chengdu, China, 15–18 July 2013; pp. 1368–137.
  6. Jiang, G. The Remote Fault Maintenance Support System of Logistics Carry Vehicle Based on Network. Proceedings of the International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), Wuxi, China, 14–17 October 2011; pp. 252–254.
  7. Chen, H.S.; Chiou, J.Y.; Yang, C.Y.; Wu, Y.; Hwang, W.C.; Hung, H.C.; Liao, S.W. Design and implementation of high-level compute on Android systems. Proceedings of the IEEE Symposium on Embedded Systems for Real-time Multimedia, Montreal, QC, Canada, 3–4 October 2011; pp. 96–104.
  8. Yiawoo, F.S.; Sowah, R.A. Design and development of an Android application to process and display summarised corporate data. Proceedings of the IEEE International Conference on Adaptive Science & Technology, Kumasi, Ghana, 25–27 October 2012; pp. 86–91.
  9. Serfass, D.; Yoshigoe, K. Wireless Sensor Networks Using Android Virtual Devices and Near Field Communication Peer-to-peer Emulation. Proceedings of the IEEE Southeastcon, Orlando, FL, USA, 15–18 March 2012; pp. 1–6.
  10. Wu, Y.; Luo, J.; Luo, L. Porting Mobile Web Application Engine to the Android Platform. Proceedings of the IEEE International Conference on Computer and Information Technology, Bradford, PA, USA, 29 June–1 July 2010; pp. 2157–2161.
  11. Zhou, J.; Liu, Y.; Li, P. Research on Binarization of QR Code Image. Proceedings of the International Conference on Multimedia Technology, Ningbo, China, 29–31 October 2010; pp. 1–4.
  12. Shifrin, T.; Adams, M. Linear Algebra: A Geometric Approach, 2nd ed.; W. H. Freeman: New York, NY, USA, 2010. [Google Scholar]
  13. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision, 4th ed.; Cengage Learning: Stamford, CT, USA, 2014. [Google Scholar]
  14. Shapiro, L.G.; Stockman, G.C. Computer Vision, 1st ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2001. [Google Scholar]
  15. Lowe, D. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar]
  16. Nixon, M.S.; Aguado, A.S. Feature Extraction & Image Processing for Computer Vision, 3rd ed.; Academic Press: Oxford, UK.
  17. Jang, J.S.R. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar]
  18. Mitra, P.; Maulik, S.; Chowdhury, S.P.; Chowdhury, S. ANFIS based automatic voltage regulator with hybrid learning algorithm. Int. J. Innov. Energy Syst. Power 2008, 2, 1–5. [Google Scholar]
Figure 1. Logistical Remote Association Repair Framework.
Figure 2. The LRARF process.
Figure 3. The DBMS architecture.
Figure 4. The architecture of Android systems.
Figure 5. The application program developing process.
Figure 6. Connection between server and client.
Figure 7. The real-time image capture interface.
Figure 8. Image acquisition and camera control setting.
Figure 9. The remote real-time image transformation setting.
Figure 10. The real-time image on the server computer.
Figure 11. Example of a QR-code.
Figure 12. Module image recognition process.
Figure 13. (a) The architecture of ANFIS-Based training system; (b) The ANFIS training results.
Figure 14. (a) The architecture of ANFIS-Based training system; (b) The ANFIS training results.
Figure 15. (a) Real-time image of the Douliou Station; (b) Real-time image of the Changhua Station.
Figure 16. Case of Bluetooth module identification.
Table 1. Four-stage operations and situational modes.

Stage | Situational Setting | Process
1 | 1. Fuse is burned; 2. Module is identifiable | Repair directly
2 | 1. Module is faulty; 2. Module is unidentifiable; 3. QR-code is identifiable | Capture the QR-code of the module and search by DBMS
3 | 1. Module is faulty; 2. Module is unidentifiable; 3. QR-code is unidentifiable | Capture the image of the module and search by DBMS
4 | 1. Module is faulty; 2. Module is unidentifiable; 3. QR-code is unidentifiable; 4. Image of module is unidentifiable | Service by maintenance support center
Table 2. The results of QR-code and image recognition.

No. | Image Recognition Rate (%) | Image Computing Time (s) | QR-code Recognition Rate (%) | QR-code Computing Time (s)
1 | 66.42 | 3.33 | 98.85 | 0.03
2 | 64.84 | 3.91 | 99.51 | 0.01
3 | 76.67 | 1.83 | 97.56 | 0.04
4 | 72.36 | 4.32 | 96.36 | 0.08
5 | 81.46 | 3.57 | 97.81 | 0.07
6 | 84.01 | 2.69 | 91.78 | 0.04
7 | 70.98 | 4.33 | 99.54 | 0.06
8 | 91.03 | 3.12 | 94.44 | 0.08
9 | 78.98 | 1.91 | 93.87 | 0.02
10 | 65.77 | 2.31 | 96.78 | 0.06
11 | 54.45 | 3.12 | 96.42 | 0.03
12 | 87.65 | 3.44 | 99.32 | 0.01
13 | 85.41 | 4.35 | 99.21 | 0.07
14 | 64.21 | 1.23 | 90.32 | 0.03
15 | 54.32 | 4.32 | 92.54 | 0.02
