Article

Keypoint-Based Automated Component Placement Inspection for Printed Circuit Boards

1 Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei 116, Taiwan
2 Faculty of Computer Science, Free University of Bozen-Bolzano, 39100 Bozen-Bolzano, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9863; https://doi.org/10.3390/app13179863
Submission received: 11 July 2023 / Revised: 25 August 2023 / Accepted: 30 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue AI-Enabled Internet of Things for Engineering Applications)

Abstract

This study aims to develop novel automated computer vision algorithms and systems for component placement inspection for printed circuit boards (PCBs). The proposed algorithms are able to identify the locations and sizes of different components. They are object detection algorithms based on keypoints of the target components. The algorithms can be implemented as neural networks consisting of two portions: frontend networks and backend networks. The frontend networks are used for the feature extraction of input images. The backend networks are adopted to produce component inspection results. Each component class can have its own frontend and backend networks. In this way, the neural model for a component class can be effectively reused for different PCBs. To reduce the computation time for the inference of the networks, different component classes can share the same frontend networks. A two-stage training process is proposed to effectively explore features of different components for accurate component inspection. The proposed algorithm has the advantages of simplicity in data collection for training, high accuracy in defect detection, and high reusability and flexibility for online inspection. The algorithm is an effective alternative for automated inspection in smart factories, where demand for product quality and diversification is growing.

1. Introduction

With the increasing popularity of consumer electronics products, such as laptops, smartphones, display cards, and tablets, high-quality printed circuit board (PCB) manufacturing is important. Because of the surge in market demand for PCBs, manufacturers are required to produce PCBs in large quantities. However, maintaining the quality of such large numbers of PCBs is challenging. With the advent of computer vision [1] and artificial intelligence [2] techniques, automated visual inspection methods have proven beneficial for improving inspection performance in high-volume industrial production.
One typical approach for automated visual inspection is based on the template-matching method [3,4,5] with a flaw-free reference. Basic template-based approaches accomplish defect detection by measuring the similarity (or dissimilarity) between the given test image and the reference. A common drawback of some template-matching approaches is that proper alignment between the test image and the template is required for the correlation computation. However, for many applications, the enforcement of alignment operations may be difficult, resulting in degraded detection accuracy. Furthermore, some of these techniques focus only on the inspection of solder joints or bare PCBs without components. Nevertheless, component placement inspection is a significant and challenging problem for PCB manufacturing. Many defects are caused by errors in PCB component placement, such as missing or misaligned components or the incorrect rotation of components.
Automated component placement inspection on a PCB can be achieved by the employment of semantic segmentation or object detection methods. Semantic segmentation techniques [6,7] aim to separate the images under examination into regions, where each region belongs to an individual component on the PCB [8,9]. One challenging issue for segmentation-based methods is the dense and/or uneven distribution of components on a board. It may be difficult to detect a component in a densely populated region on the PCB [8,9]. In addition, large varieties of components may further degrade the accuracy of component detection. For components with small sizes, lower segmentation accuracy may result from the class-imbalance problem [10,11]. Therefore, it would be difficult to adopt segmentation-based surface defect detection techniques [12] for component placement inspection.
Region-based convolutional neural networks (CNNs) [13,14,15,16] for object detection have also been found to be effective. Many such techniques attain high detection accuracy by employing anchor boxes as detection candidates [17,18,19]. Anchor boxes are boxes of various sizes and aspect ratios. A large set of anchor boxes [11] may be required for accurate detection. Subsequently, high computation overhead is usually introduced for both training and inference. In addition, expensive manual labeling efforts are required when components are varied [20].
An alternative to anchor-based approaches is to represent each object as a single [21] or multiple key points [22,23]. For techniques with a single keypoint, the keypoint of an object is the center of the bounding box of the object. When an object is represented by a pair or a triplet of keypoints, each keypoint represents the center or corners of the bounding box. The corresponding object detection operations are equivalent to finding the keypoints of the objects. The need for anchor boxes is then bypassed.
In addition to keypoint-based approaches, neural network (NN) models with a transformer attention mechanism [24,25,26] have been found to be effective for object detection. The employment of transformer attention is beneficial for exploring the spatial correlation for accurate detection of objects. Nevertheless, NN models such as DETR [24] require a high level of network complexity to carry out transformer attention operations. High computation time and large memory sizes are then needed for inference operations. Efforts such as deformable DETR [25] and lite DETR [26] have been proposed to reduce computational latency. However, network sizes are not lowered. It may still be difficult to deploy such algorithms on edge devices with limited memory sizes for inference operations.
For the existing object detection techniques stated above, another common drawback is that the reusability of network models is not taken into consideration. Because of supply chain management for components, even for the same products, it is likely that the corresponding PCBs are updated by the accommodation of new component classes and the removal of old ones. In such cases, it is desired that the NN models be fine-tuned for the updated PCBs. This would involve the retraining of the NN models for the detection of the new components. Nevertheless, because the NN models are shared by all the components, the retraining processes may also introduce variations in detection accuracy for the common component classes adopted by both original and updated PCBs. Therefore, robustness in inspection accuracy would be an important issue.
The objective of this study is to develop novel NN models for component placement inspection on PCBs. The proposed NN models have the advantages of low labeling effort, high object detection accuracy, low computation latency, small network sizes, and high training robustness. Corresponding Internet of Things (IoT) systems are also built for field tests. The proposed algorithm is a keypoint-based technique. Therefore, it has a simple training and inference process without the requirement of anchor boxes. The effort required for manual labeling can be significantly lowered. In addition, no templates are necessary. The alignment issues associated with template-based techniques can then be avoided.
In the proposed algorithm, the inspection of components from the same class is regarded as a single task. In the architecture, the tasks can be separated into more than one group. Different tasks in the same group may share the same network layers, termed a frontend NN, for feature extraction. Each task may have its own dedicated output layers, termed a backend NN. A novel two-stage training process is adopted for the training of the proposed network. At the first stage, only the frontend network is trained. The training procedure is intended to produce robust features [27,28]. The resulting frontend networks serve as the initial frontend network for the training in the second stage, where both the frontend and backend networks are involved.
When the network size and computational complexity of the algorithm are important concerns, different tasks in the same group can also share the same backend NN. In these cases, the proposed algorithm is equivalent to existing keypoint-based techniques [21], and the network size may still remain low, even for a PCB consisting of a large number of component classes.
For applications where PCBs are constantly updated, the reuse of networks of existing components needs to be taken into consideration so that retraining efforts for the networks can be lowered. In these applications, dedicated frontend and/or backend networks can be assigned to each task. When a new component class is accommodated, training is necessary only for the backend NN for the new class and for the frontend NN for the classes sharing the same group with the new class. For the classes belonging to other groups, the corresponding networks can be directly reused. In this way, the training overhead for the inclusion of new classes can be reduced without large degradation in detection accuracy for the existing classes.
The proposed NN models are deployed in edge devices of IoT systems for real-time component inspection. Although the NN models can be deployed in the cloud, data transmission congestion and delivery latency can be important issues for real-time inspection. The edge devices are able to directly process the images produced by cameras without data delivery over the Internet. However, edge devices often offer only limited computational capacity and memory space. The proposed algorithm, because of its simplicity and high detection accuracy, can be effectively deployed on edge devices, such as the Jetson Nano, for applications requiring low inspection latency and accurate object detection.
The remainder of this study is organized as follows. In Section 2, we present the proposed algorithm for component placement inspection in detail. Section 3 contains some experimental results for the algorithm. The concluding remarks are then included in Section 4.

2. Proposed Algorithm

In this section, we start with a keypoint-based NN for component placement inspection with a single class. The keypoint-based NN is then generalized to multiple classes with model reuse. The two-stage training procedure for model reuse is then presented in detail. A list of commonly used symbols is shown in Table A1.

2.1. Component Placement Inspection for a Single Class

Figure 1 shows the block diagram of the proposed NN model for single-class component detection. As shown in the figure, the network model can be separated into two portions: frontend network and backend network. The frontend network is used for the feature extraction of the input image. The backend network produces the results for component placement inspection. It provides a heat map indicating the likelihood that each pixel in the image belongs to the component. The size of the component can also be predicted by the backend network.
Let X be an input image of width W and height H. Our goal is to produce a heat map (Y) with a width of W / R and a height of H / R , where R is the output stride size. Let Z be the ground truth of Y. The ground truth is determined by the keypoint of each component in the input image. As shown in Figure 2, the keypoint of a component is the centroid of the component. For the sake of simplicity, assume that we only focus on the detection of components from a single class in the input image. Let ( p , q ) be the location of a component from the class. We then compute Z for the component by splatting the keypoint of the component using a Gaussian kernel, i.e.,
Z(i, j) = \exp\left( -\frac{(i - p/R)^2 + (j - q/R)^2}{2\sigma^2} \right),    (1)
where Z(i, j) is the (i, j)-th pixel of the ground truth image Z, and σ is a standard deviation that depends on the size of the component. In cases in which two or more components of the class are present, the corresponding kernels are likely to overlap. We then take the element-wise maximum over the Gaussian kernels of the components. Figure 3 shows an example of a ground truth image for the detection of capacitors on the PCB.
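To make the construction of Z concrete, the following NumPy sketch splats a Gaussian kernel at each keypoint and resolves overlaps with an element-wise maximum, following (1). The function name, the argument layout, and the per-component choice of σ are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def ground_truth_heatmap(keypoints, sigmas, W=512, H=512, R=4):
    """Splat each keypoint (p, q) with a Gaussian kernel on an (H/R) x (W/R) grid,
    taking the element-wise maximum where kernels of different components overlap."""
    Hs, Ws = H // R, W // R
    jj, ii = np.meshgrid(np.arange(Ws), np.arange(Hs))   # (i, j) coordinates of the heat map
    Z = np.zeros((Hs, Ws), dtype=np.float32)
    for (p, q), sigma in zip(keypoints, sigmas):
        g = np.exp(-((ii - p / R) ** 2 + (jj - q / R) ** 2) / (2.0 * sigma ** 2))
        Z = np.maximum(Z, g)
    return Z
```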
Let J_1 be the loss function for the heat map in the training of the proposed network model. A variant of the focal loss function [11] is adopted for J_1, i.e.,
J_1 = -\sum_{i=1}^{W/R} \sum_{j=1}^{H/R} M(i, j),    (2)
where
M(i, j) = \begin{cases} (1 - Y(i, j))^{\alpha} \log(Y(i, j)), & \text{if } Z(i, j) = 1, \\ (1 - Z(i, j))^{\beta} (Y(i, j))^{\alpha} \log(1 - Y(i, j)), & \text{otherwise}, \end{cases}    (3)
where Y(i, j) is the (i, j)-th component of the heat map Y, and the parameters α > 0 and β > 0 should be prespecified before training.
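A minimal TensorFlow sketch of the focal-loss variant in (2) and (3) is shown below; the clipping constant and the default values α = 2 and β = 4 are implementation assumptions and would be prespecified in practice.

```python
import tensorflow as tf

def heatmap_focal_loss(Z, Y, alpha=2.0, beta=4.0, eps=1e-7):
    """Focal-loss variant J1 over a predicted heat map Y and its ground truth Z."""
    Y = tf.clip_by_value(Y, eps, 1.0 - eps)
    pos = tf.cast(tf.equal(Z, 1.0), tf.float32)            # pixels with Z(i, j) = 1
    pos_term = tf.pow(1.0 - Y, alpha) * tf.math.log(Y)
    neg_term = tf.pow(1.0 - Z, beta) * tf.pow(Y, alpha) * tf.math.log(1.0 - Y)
    # J1 is the negated sum of M(i, j) over the heat map
    return -tf.reduce_sum(pos * pos_term + (1.0 - pos) * neg_term)
```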
In addition to the detection of component locations by the heat map Y, it may be desired to find the component sizes for inspection. This can be accomplished by appending a network branch to the backend network, as shown in Figure 1. Let S = {S_W, S_H} be the output of the branch for size estimation, where S_W and S_H are images with a width of W/R and a height of H/R. Furthermore, let K be the number of components, and let W_k and H_k be the ground truth of the width and height of the k-th component, respectively (k = 1, ..., K). The loss function for the component sizes is defined as
J_2 = \sum_{k=1}^{K} \left( |W_k - \hat{W}_k| + |H_k - \hat{H}_k| \right),    (4)
where \hat{W}_k and \hat{H}_k are the estimated width and height of component k obtained from S. Let (i_k, j_k) be the ground truth location of the keypoint of the k-th component (k = 1, ..., K). The estimates \hat{W}_k and \hat{H}_k can be computed as
\hat{W}_k = S_W(i_k, j_k), \quad \hat{H}_k = S_H(i_k, j_k).    (5)
The total loss for the training of the proposed network model for a single class, denoted by J_T, is then given by
J_T = J_1 + J_2.    (6)
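The size loss in (4), the keypoint look-up in (5), and the total loss in (6) can be sketched as follows; reading the size maps at the ground-truth keypoints is done here with tf.gather_nd, and the variable names are illustrative.

```python
import tensorflow as tf

def size_loss(S_W, S_H, keypoints, W_gt, H_gt):
    """L1 size loss J2 of (4); estimates are read off S_W and S_H at the keypoints (5).

    S_W, S_H:   predicted width/height maps of shape (H/R, W/R).
    keypoints:  integer tensor of shape (K, 2) holding the ground-truth (i_k, j_k).
    W_gt, H_gt: ground-truth widths and heights, shape (K,).
    """
    W_hat = tf.gather_nd(S_W, keypoints)    # W_hat_k = S_W(i_k, j_k)
    H_hat = tf.gather_nd(S_H, keypoints)    # H_hat_k = S_H(i_k, j_k)
    return tf.reduce_sum(tf.abs(W_gt - W_hat) + tf.abs(H_gt - H_hat))

# Total single-class loss of (6):
# J_T = heatmap_focal_loss(Z, Y) + size_loss(S_W, S_H, keypoints, W_gt, H_gt)
```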

2.2. Component Placement Inspection for Multiple Classes

As shown in Figure 4, the NN model for component placement inspection for multiple classes can be viewed as an extension of its counterpart for a single class. Let N be the number of component classes for the inspection. Consequently, there are N heat maps for the detection of components, where the c-th heat map is adopted for the detection of components in the c-th class. Likewise, there are N pairs of images for the estimation of component sizes, where the c-th pair {S_W, S_H} is used for the components in the c-th class.
From Figure 4, we can see that there are three network models for multiple classes. The first approach, as shown in Figure 4a, is a direct employment of the model for a single class for multiple classes; that is, all the classes share the same frontend network and backend network. When the number of component classes (N) increases, the model size and computation latency for inference can be maintained. However, the shared frontend network and/or backend network may not be matched to a particular class. Therefore, the detection accuracy for the class may be degraded. Furthermore, the reusability of the model may be an important issue. Because all the classes in the model are jointly trained, the incorporation of new classes may result in the retraining of all the classes. Large overhead may then be required for the inspection of new PCBs, where the accommodation of new classes for component inspection is necessary.
The second model allows all the classes to share the same frontend network for feature extraction. Furthermore, each class has its own dedicated output branches in the backend network, as shown in Figure 4b. Because of the sharing of the frontend network, the overall model size and computational complexities can still be low for a large number of classes (N). In addition, because there is a dedicated backend network for each class in the second model, it may outperform the first model for component inspection. Nevertheless, when the incorporation of new classes is desired, it may still be necessary to retrain all the existing classes because the shared frontend network needs to be fine-tuned for the new class, as well as the existing classes.
The third model can be viewed as an extension of the second model. We separate N classes of components into P groups in the third model. The classes belonging to the same group share the same frontend network for feature extraction. Therefore, there are P frontend networks in the third model, as shown in Figure 4c. One simple approach for carrying out grouping is based on the shapes of the components. For example, components with similar shapes can be grouped together. The shape information for each group can then be fully exploited to produce accurate heat maps for component placement prediction. Because there is a dedicated frontend network for each group for feature extraction, the third model may have superior performance over the first and second models for component inspection. Moreover, the networks in the third model could be reused; that is, the accommodation of a new component class may require the training of only the NNs in the group to which the new class belongs. The networks in the other groups can be effectively reused.
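The three configurations in Figure 4 differ only in how frontend and backend networks are shared. A rough Keras sketch of a generic builder is given below; build_frontend and build_backend stand for the networks of Section 2.4, and the grouping passed to the builder (e.g., by component shape) is an assumption used only for illustration.

```python
from tensorflow import keras

def build_inspection_model(num_classes, groups, build_frontend, build_backend):
    """Wire frontend/backend networks according to a grouping of the classes.

    groups: list of lists of class indices, e.g.
            Model 1 or 2: [[0, 1, 2, 3, 4]]   (one shared frontend)
            Model 3:      [[0, 1, 2], [3, 4]] (one frontend per group)
    For Model 1, build_backend would be called once and shared by all classes.
    """
    x = keras.Input(shape=(512, 512, 3))
    heatmaps, sizes = [None] * num_classes, [None] * num_classes
    for group in groups:
        features = build_frontend()(x)              # shared by all classes in the group
        for c in group:
            y_c, s_c = build_backend()(features)    # dedicated heat map and size maps
            heatmaps[c], sizes[c] = y_c, s_c
    return keras.Model(inputs=x, outputs=heatmaps + sizes)
```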

2.3. Two-Stage Training Process

The proposed two-stage training process can be applied to the three models proposed in Section 2.2. The training operations in the first stage can be viewed as pretraining operations for the frontend networks. Based on the results from the first stage, the goal of the second stage is the refinement of the frontend networks and the complete training of the backend networks. The two-stage training operation is based on a training set, denoted by A, containing a training images.

2.3.1. First Stage Training

The goal of training in the first stage for each network model is to provide an effective frontend network for feature extraction. This training process can be viewed as a representation learning process [27] that fully exploits the features for the subsequent generation of heat maps by the backend networks. For each training image in A, data augmentation operations are employed to produce b images. Employing data augmentation enables more variations to be included in the training set. Let B_i be the set of augmented images derived from the i-th image in A. Furthermore, let B = B_1 ∪ ⋯ ∪ B_a be the set of all augmented images for training.
For a fixed i, it is desired that the frontend network produce similar features for the augmented images from the set of augmented images ( B i ). Conversely, images from different augmented sets should produce different features. For PCB inspection, it is usually desired that the impacts of illumination on inspection accuracy be minimized. Therefore, the goal of data augmentation as reported in this study is to provide images with different illuminations. In this way, the proposed frontend networks are less sensitive to the variations in illumination of the PCBs.
The training in the first stage is carried out on a tuple-by-tuple basis. For each training image X_i, a number of tuples are formed, where each tuple contains (a + 1) elements. Let
T = (X_i, B_1, ..., B_a)
be a tuple for X_i, where B_j is an image drawn randomly from the set B_j, j = 1, ..., a. Let \mathcal{T}_i be the set of tuples formed from training image X_i. The loss function for the training in the first stage is given as
L = -\sum_{i=1}^{a} \sum_{T \in \mathcal{T}_i} \log \frac{\exp(F(X_i)^{T} F(B_i))}{\sum_{j=1}^{a} \exp(F(X_i)^{T} F(B_j))},
where the function F denotes the frontend network.
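The first-stage loss above can be implemented with a standard softmax cross-entropy over feature similarities, since for each tuple the positive pair F(X_i), F(B_i) sits on the diagonal of the similarity matrix. The sketch below assumes the frontend F returns one flattened feature vector per image and processes one tuple per training image; the function and variable names are illustrative.

```python
import tensorflow as tf

def first_stage_loss(F, X, B):
    """Contrastive first-stage loss for one batch of tuples.

    X: the a original training images, shape (a, 512, 512, 3).
    B: augmented images, row j drawn from the set B_j, shape (a, 512, 512, 3).
    """
    f_x = F(X)                                          # (a, d) features of the originals
    f_b = F(B)                                          # (a, d) features of the augmentations
    logits = tf.matmul(f_x, f_b, transpose_b=True)      # entry (i, j) = F(X_i)^T F(B_j)
    labels = tf.range(tf.shape(logits)[0])              # the positive pair is (i, i)
    return tf.reduce_sum(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```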

2.3.2. Second Stage Training

In the second stage, both the frontend and the backend networks are trained on the training set (A). The frontend networks acquired from the first stage serve as the initial frontend networks in the second stage. The initial backend networks are randomly initialized. The loss function in (6) is also adopted for the training in the second stage, generalized over all classes, i.e.,
J = \sum_{c=1}^{N} \left( J_1^{(c)} + J_2^{(c)} \right),
where J_1^{(c)} and J_2^{(c)} are the J_1 and J_2 defined in (2) and (4) for the components in class c, respectively.
An advantage of the two-stage training process is that the impact of illumination on placement inspection can be effectively lowered. Based on the representation learning scheme in the first stage, features robust to illumination variations can be provided for the subsequent heat-map generation. This is beneficial for attaining high accuracy in detecting components on PCBs without introducing false alarms.

2.4. Examples of Frontend and Backend Networks

The frontend and backend networks considered in this study are not restricted to any specific type of network. However, for evaluation purposes, examples of frontend and backend networks are provided, as shown in Figure 5. The block diagram of the model and the feature maps produced by each layer of the model are revealed in Figure 5a and Figure 5b, respectively.
We can see from Figure 5 that the complexities for the frontend network are higher than those for the backend network. In the frontend network, residual blocks (ResBlocks) [29] and up–down networks [30] are employed for compact and efficient feature representation. Each ResBlock contains a shortcut [29] for efficient weight updating. Two-dimensional convolution (Conv) networks, together with their transposed (Conv Trans) counterparts, are adopted for the implementation of up–down networks. Both the batch normalization (BN) and rectified linear unit (RELU) activation functions are also included in residual blocks and up–down networks.
Each backend network is dedicated to a single class. To reduce the complexities for the entire component inspection network, each backend network has a simple architecture. We can observe from Figure 5 that there are only two convolution layers for each backend network. The simplicity of backend networks is beneficial for facilitating both the training and inference operations, especially when the number of classes (N) is large for component inspection.
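A Keras sketch of the example networks in Figure 5 and Table 2 is given below. The channel widths and strides follow Table 2, but the internal layout of the residual blocks and the activations of the output layers are simplifications; treat it as an architectural illustration rather than the exact published model.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel, stride):
    x = layers.Conv2D(filters, kernel, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def res_block(x, filters, stride):
    """Residual block with a projection shortcut (internal widths simplified)."""
    shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    y = conv_bn_relu(x, filters, 3, stride)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_example_network():
    inp = keras.Input(shape=(512, 512, 3))
    # Frontend: down path (Conv 1, ResBlocks) followed by an up path (Conv Trans layers)
    x = conv_bn_relu(inp, 32, 7, 2)                       # 256 x 256 x 32
    x = res_block(x, 64, 2)                               # 128 x 128 x 64
    x = res_block(x, 128, 2)                              # 64 x 64 x 128
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = conv_bn_relu(x, 64, 3, 1)                         # 256 x 256 x 64
    feat = conv_bn_relu(x, 64, 3, 2)                      # 128 x 128 x 64 features
    # Backend: two shallow branches producing the heat map Y and the size maps S
    h = layers.Conv2D(128, 3, padding="same", activation="relu")(feat)
    heatmap = layers.Conv2D(1, 1, activation="sigmoid")(h)    # Y: 128 x 128 x 1
    s = layers.Conv2D(128, 3, padding="same", activation="relu")(feat)
    sizes = layers.Conv2D(2, 1)(s)                            # S = {S_W, S_H}: 128 x 128 x 2
    return keras.Model(inp, [heatmap, sizes])
```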

3. Experimental Results

This section provides the experimental results of the proposed work. The experimental setup is first presented. The performance metrics are then discussed. This is followed by numerical results and comparisons among the proposed and existing techniques.

3.1. Experimental Setup

The setup of the experiments is shown in Figure 6, which is a simple IoT inspection platform with a FLIR Blackfly S USB 3 high-resolution industrial camera. The platform, together with an edge device such as a PC or a Jetson Nano, can be easily integrated into a real production line for online inspection. The development of the NN models is based on Keras on top of TensorFlow 2.0.
In the experiments, we consider only the inspection of screws, capacitors, mounting holes, 3-pin chips, and 8-pin chips on the PCB, as shown in Figure 7; that is, there are N = 5 component classes. In many assembly lines, some of these components, such as screws, may be placed on the PCB manually. As a result, component misplacements are likely. Furthermore, it may be difficult to perform accurate inspection for components such as 3-pin chips because of the complex background and the small component sizes. Successful inspection of the components shown in Figure 7 would be a promising indication of accurate inspection of the other PCB components.
The images for our experiments have an equal size of 512 × 512; that is, the height and width of the input images are W = H = 512. However, the sizes of different PCBs may vary. When PCB sizes are larger than 512 × 512, a sliding window operation in raster-scan order is performed over the PCB images, where the window size is 512 × 512. Each image acquired from the operation is an input image for the proposed algorithm; that is, we separate the PCB images in raster-scan fashion into overlapping subimages with an equal size of 512 × 512 for component inspection. This provides a high degree of flexibility to accommodate PCBs of different sizes in real-world scenarios.
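A possible implementation of this raster-scan windowing is sketched below; the amount of overlap (the window stride) is not specified in the text, so the value used here is an assumption, and border handling is left out for brevity.

```python
import numpy as np

def sliding_windows(pcb_image, win=512, stride=384):
    """Split a large PCB image into overlapping win x win subimages in raster-scan order."""
    h, w = pcb_image.shape[:2]
    crops, origins = [], []
    for top in range(0, max(h - win, 0) + 1, stride):
        for left in range(0, max(w - win, 0) + 1, stride):
            crops.append(pcb_image[top:top + win, left:left + win])
            origins.append((top, left))    # kept so detections can be mapped back to the PCB
    return crops, origins
```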
To increase the variety of the training set, different cropping results from the training PCBs are adopted as training images. Some examples are shown in Figure 8, where the objects to be detected are marked. The training set (A) contains a = 180 images. The i-th image in the training set (A) is further augmented to form a set (B_i) containing b = 16 images. After cropping and augmentation, 2880 images (i.e., a × b = 2880) with dimensions of 512 × 512 are created as the augmented image set (B) for the training of the proposed NN model. Table 1 shows the division of set B for each component class. From Figure 8, it can be observed that a single training image may contain multiple types of components. Moreover, different training images may consist of different types of components. Therefore, as shown in Table 1, the number of augmented images for each component class is different in the training set (B). It can also be observed from Table 1 that although the number of augmented images for the eight-pin chip class is the lowest, there are still 1602 images, which helps avoid overfitting during training.
Table 2 shows the parameters in each layer of the basic NN model considered in this study. For the sake of simplicity, this model contains only one frontend network and one backend network for single-class inspection; the names of layers are defined in Figure 5. More frontend networks and/or backend networks with the same specification can be appended in the model for applications requiring multiple groups with multiple classes. Table 3 shows the specifications of Model 1, Model 2, and Model 3 for five component classes. For meaningful comparisons, as shown in Table 3, all the models have the same dimensions ( 512 × 512 ) for input image X; that is, the original width and height are W = H = 512. In addition, they have the same dimensions ( 128 × 128 ) for output images Y and S. Because W = H = 512, we see that the output stride size is R = 4 for heatmap generation.
We can also observe from Table 3 that Model 1 has the smallest size as compared with Model 2 and Model 3. This is because Model 1 has only a single frontend NN and a single backend NN shared by all the component classes. In contrast, in Model 2, a dedicated backend NN is assigned to each component class. Furthermore, all the component classes are separated into P = 2 groups in Model 3. The screws, mounting holes, and capacitors form the first group, and the three-pin and eight-pin chips are in the second group. In Model 3, each group has its own frontend NN. Therefore, Model 3 has the largest size.

3.2. Performance Metrics

The performance metrics considered in this study include the quality of component placement inspection, the network size, and the computation time of the proposed model. Component inspection accuracy metrics, such as the average precision (AP) [31,32] and the F1 score [31,32], are used to indicate the quality of component placement inspection in the experiments. Images of PCBs not belonging to the training set are adopted as the test set for the evaluation of the AP value and F1 score. The network size is defined as the number of weights in the network and indicates the memory resources required for the deployment of the network. The computation time is the inference latency of the model; it reveals the promptness of the model for inspection.
Both the AP value and the F1 score are evaluated from precision and recall rates. For a given component class c, let TP (true positive) and FN (false negative) be the numbers of components of class c in the test set that are detected and missed, respectively. Let FP (false positive) be the number of components from other classes in the test set that are falsely identified as components of class c. The precision and recall [31] rates are then defined as
\text{Precision} = \frac{TP}{TP + FP}, \quad \text{Recall} = \frac{TP}{TP + FN}.
The measurements of precision and recall rates are based on the testing images extracted from the PCBs shown in Figure 9, which are different from the training images.
Because different threshold values for detection may result in different pairs of precision and recall rates, a precision–recall curve could be obtained by sweeping the threshold values. The AP value is then defined as the area under the precision–recall curve. Higher AP values imply better precision–recall performance.
Given a pair of precision and recall rates, the computation of the corresponding F1 score [31] is given by
\text{F1 score} = \frac{2}{\frac{1}{\text{Precision}} + \frac{1}{\text{Recall}}}.
The score provides a comprehensive evaluation based on precision and recall. Given a precision–recall curve, the corresponding F1 score is obtained by finding the pair of precision and recall values on the curve attaining the maximum F1 score.
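For reference, a small NumPy sketch shows how the AP value and the best F1 score can be read off a list of precision-recall pairs obtained by sweeping the detection threshold; trapezoidal integration and the small epsilon guard are implementation choices, not part of the published evaluation code.

```python
import numpy as np

def ap_and_best_f1(precisions, recalls, eps=1e-12):
    """AP as the area under the precision-recall curve; F1 as the best score on the curve."""
    order = np.argsort(recalls)
    p = np.clip(np.asarray(precisions, dtype=float)[order], eps, 1.0)
    r = np.clip(np.asarray(recalls, dtype=float)[order], eps, 1.0)
    ap = np.trapz(p, r)                          # area under the P-R curve
    f1 = np.max(2.0 / (1.0 / p + 1.0 / r))       # best F1 over the swept thresholds
    return ap, f1
```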

3.3. Numerical Results and Comparisons

Table 4 shows the corresponding AP values and F1 scores of the proposed two-stage training process for Model 1 for different components considered in this study. For comparison purposes, the AP values and F1 scores with only a single-stage training process for Model 1 are also included, where the representation learning for the frontend network is omitted. Model 1 with single-stage training can be viewed as the basic keypoint algorithm [21] for object detection. From Table 4, it can be observed that a two-stage training process is able to achieve higher AP values and F1 scores as compared with its single-stage counterpart. This is because the representation learning operations are beneficial for providing robust features for the subsequent heatmap generation and component size estimations.
Comparisons of AP values and F1 scores among Model 1, Model 2, and Model 3 are included in Table 5. The proposed two-stage training process is adopted for the training of all the models. It can be observed from Table 5 that Model 3 has superior AP values and F1 scores relative to Model 2 and Model 1 for many of the component classes. Model 3 has higher accuracy because there is a dedicated frontend NN for each group of components. In contrast, in Model 1 and Model 2, a single frontend NN is shared by all the component classes. Therefore, it is more difficult for Model 1 and Model 2 to carry out accurate detection for each individual component class.
Figure 10 reveals the precision–recall curves for all the component classes considered in this study for Model 3 with two-stage training operations. It can be observed from Figure 10 that the proposed algorithm is able to maintain high precision, even with a high recall value. In particular, for the class of screws, when the recall value reaches 0.916, the precision value is 0.973. Therefore, the proposed algorithm is able to achieve high detection accuracy without triggering a large number of false alarms.
Figure 11 shows examples of the inspection results for the capacitors for different PCBs. Accurate locations and sizes of the capacitors can still be acquired, even for the images from the testing set. To further demonstrate the effectiveness of the proposed algorithm, Figure 12 reveals examples for the joint inspection for screws, capacitors, mounting holes, three-pin chips, and eight-pin chips. It can be observed from the figure that joint inspection of five components can also be effectively carried out. In fact, the sizes of some of the components, such as three-pin chips, are very small, which may make it difficult to identify the components, even by direct visual inspection. The proposed algorithm is able to provide accurate inspection for small components on complex backgrounds. These examples reveal that the proposed algorithm is effective in improving PCB inspection quality for automatic manufacturing in smart factories.
In Table 6, comparisons of the proposed algorithm with existing works such as the faster region-based convolutional neural network (Faster RCNN) [19], DEtection TRansformer (DETR) [24], single-shot detection with MobileNet (SSD + MobileNet) [33], and you only look once (YOLO) v5 [34] are made to verify inspection quality. Furthermore, because it is desired to deploy the NN models in embedded platforms with limited computation capacity and/or storage size, the computation speed and model size required for inference are important concerns for the corresponding applications. Therefore, as shown in Table 7, we also consider comparisons of the inference latency and model sizes among these algorithms. The inference latency is measured on a personal computer (PC) and an embedded platform. The PC is equipped with an Intel Core i9-9900K CPU and an NVIDIA GeForce RTX 3080 Ti GPU. The embedded platform is a Jetson Nano with an ARM Cortex-A57 CPU and an NVIDIA Maxwell GPU.
We can see from Table 6 and Table 7 that the proposed algorithm outperforms many of the existing algorithms in the inspection of components. In fact, the proposed algorithm has higher AP values than Faster RCNN [19], SSD + MobileNet [33], and YOLO v5 [34] for all components, and higher F1 scores for most of them. The proposed algorithm also has AP values and F1 scores comparable to those of DETR [24]. In addition, the proposed algorithm has a significantly lower inference time for PC-based inference. In particular, the inference times of the proposed algorithm and DETR are 21.4 ms and 206.5 ms on the PC, respectively. The corresponding throughputs (in frames per second, FPS) are 46.73 and 4.84. The proposed algorithm has a faster computation speed because it has a smaller network size than its DETR counterpart. In addition, it would be difficult to deploy DETR in low-cost edge devices such as the Jetson Nano because of its large network size. In contrast, we successfully deployed the proposed algorithm on the Jetson Nano. The latency of the proposed algorithm on the Jetson Nano is 146.9 ms; that is, the algorithm achieves 6.81 FPS, even on a low-cost edge device. The proposed algorithm therefore has the advantages of high inspection accuracy, low inference latency, small model size, and low-cost deployment. All these preliminary evaluations reveal that the proposed algorithm is promising for real-time, high-accuracy component placement inspection.

4. Conclusions

Experimental results show that the proposed algorithm is effective for component placement inspection of PCBs. The algorithm provides a simple labeling process for training. The sizes of the proposed networks are also significantly lower than those of existing networks. The weight size of the proposed Model 3 network is only 3.96% of that of the DETR algorithm (i.e., 1,644,751 vs. 41,487,306 weights, respectively). The two-stage training process is effective in feature extraction and enhances the detection accuracy. For example, for the Model 1 network, the F1 score for capacitors is improved from 0.8867 with a single-stage training process to 0.9320 with a two-stage training process. In addition, the F1 scores for mounting holes achieved by the proposed Model 3 network, DETR, and YOLO v5 are 0.9545, 0.9341, and 0.9220, respectively. Therefore, the proposed algorithm can attain superior detection accuracy relative to existing techniques. Furthermore, the algorithm has high model reusability and low computational complexity for inspection. Even for low-cost edge devices such as the Jetson Nano, the inference latency of the Model 3 network is only 146.9 ms. These advantages are beneficial for the deployment of the algorithm on edge devices for accurate and fast component inspection over large varieties of PCBs.

Author Contributions

Conceptualization, W.-J.H. and T.-M.T.; methodology, S.-T.C. and W.-J.H.; software, S.-T.C.; validation, S.-T.C. and W.-J.H.; investigation, S.-T.C. and T.-M.T.; resources, W.-J.H.; writing—original draft preparation, W.-J.H.; writing—review and editing, W.-J.H.; visualization, S.-T.C.; supervision, W.-J.H.; project administration, W.-J.H. and T.-M.T.; funding acquisition, W.-J.H. All authors have read and agreed to the published version of the manuscript.

Funding

The original research work presented in this paper was made possible, in part, by the National Science and Technology Council, Taiwan, under grants MOST 111-2622-E-003-001 and MOST 111-2221-E-003-009-MY2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AP    Average precision
BN    Batch normalization
CNN    Convolutional neural network
Conv    Two-dimensional convolution
DETR    Detection transformer
Faster RCNN    Faster region-based convolutional neural network
FN    False negative
FP    False positive
IoT    Internet of Things
NN    Neural network
PC    Personal computer
PCB    Printed circuit board
ResBlock    Residual block
RELU    Rectified linear unit
SSD    Single-shot detection
TP    True positive
YOLO    You only look once

Appendix A. Frequently Used Symbols

Table A1. A list of symbols used in this study.

| A | Set of training images. |
| a | The number of training images in the training set (A). |
| B_i | An augmented image randomly drawn from the set B_i. |
| B_i | Set of augmented images derived from the i-th image (X_i) of A. |
| B | B = B_1 ∪ ⋯ ∪ B_a is the set of all augmented training images. |
| b | The number of augmented images in set B_i. |
| F | The function F denotes the frontend network. |
| H | Height of input image X. |
| H_k | Ground truth of the height of the k-th component. |
| Ĥ_k | Estimated height of the k-th component; Ĥ_k can be obtained from S_H according to (5). |
| K | Number of components. |
| N | Number of component classes for inspection. |
| P | Number of groups. |
| R | Output stride size. |
| S | S = {S_W, S_H} are the results of size estimation for components. |
| S_H | Estimation of the heights of components. |
| S_W | Estimation of the widths of components. |
| T | A tuple containing (a + 1) elements for first-stage training. |
| X | An input image for component placement inspection. |
| X_i | The i-th image of the set of training images (A). |
| Y | Output heat map produced by the proposed neural network. |
| Y(i, j) | The (i, j)-th pixel of the output heat map (Y). |
| W | Width of the input image (X). |
| W_k | Ground truth of the width of the k-th component. |
| Ŵ_k | Estimated width of the k-th component; Ŵ_k can be obtained from S_W according to (5). |
| Z | Ground truth for the heat map (Y). |
| Z(i, j) | The (i, j)-th pixel of the ground truth image (Z). |

References

  1. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2011. [Google Scholar]
  2. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  3. Chauhan, A.P.S.; Bhardwaj, S.C. Detection of bare PCB defects by image subtraction method using machine vision. In Proceedings of the World Congress on Engineering, London, UK, 6-8 July 2011; Volume 2, pp. 6–8. [Google Scholar]
  4. Mogharrebi, M.; Prabuwono, A.S.; Sahran, S.; Aghamohammadi, A. Missing Component Detection on PCB Using Neural Networks. In Advances in Electrical Engineering and Electrical Machines. Lecture Notes in Electrical Engineering; Zheng, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 134. [Google Scholar]
  5. Tan, J.S.; Mohd-Mokhtar, R. Neural Network for the Detection of Misplaced and Missing Regions in Images. In Proceedings of the IEEE Conference on Automatic Control and Intelligent Systems, Kota Kinabalu, Malaysia, 21 October 2017; pp. 134–139. [Google Scholar]
  6. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  7. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015. [Google Scholar]
  8. Lim, D.U.; Kim, Y.G.; Park, T.H. SMD classification for automated optical inspection machine using convolution neural network. In Proceedings of the IEEE International Conference Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 395–398. [Google Scholar]
  9. Li, D.; Li, C.; Chen, C.; Zhao, Z. Semantic Segmentation of a Printed Circuit Board for Component Recognition Based on Depth Images. Sensors 2020, 8, 5318. [Google Scholar] [CrossRef]
  10. Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 2016, 5, 221–232. [Google Scholar] [CrossRef]
  11. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. arXiv 2017, arXiv:1708.02002. [Google Scholar]
  12. Lai, C.W.; Zhang, L.; Tai, T.M.; Tsai, C.C.; Hwang, W.J.; Jhang, Y.J. Automated Surface Defect Inspection Based on Autoencoders and Fully Convolutional Neural Networks. Appl. Sci. 2021, 11, 7838. [Google Scholar] [CrossRef]
  13. Lin, Y.L.; Chiang, Y.M.; Hsu, H.C. Capacitor Detection in PCB Using YOLO Algorithm. In Proceedings of the IEEE International Conference System Science and Engineering, New Taipei City, Taiwan, 8–30 June 2018. [Google Scholar]
  14. Jiao, L.; Zhang, F.; Liu, F.; Yang, S.; Li, L.; Feng, Z.; Qu, R. A Survey of Deep Learning Based Object Detection. IEEE Access 2019, 7, 128837–128868. [Google Scholar] [CrossRef]
  15. Adibhatla, V.A.; Chih, H.-C.; Hsu, C.-C.; Cheng, J.; Abbod, M.F.; Shieh, J.S. Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks. Electronics 2021, 9, 1547. [Google Scholar] [CrossRef]
  16. Li, J.; Li, W.; Chen, Y.; Gu, J. A PCB Electronic Components Detection Network Design Based on Effective Receptive Field Size and Anchor Size Matching. Comput. Intell. Neural Sci. 2021, 2021, 6682710. [Google Scholar] [CrossRef] [PubMed]
  17. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  18. Fu, C.Y.; Liu, W.; Ranga, A.; Tyagi, A.; Berg, A.C. DSSD: Deconvolutional Single Shot Detector. arXiv 2017, arXiv:1701.06659. [Google Scholar]
  19. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  20. Kuo, C.W.; Ashmore, J.D.; Huggins, D.; Kira, Z. Data-Efficient Graph Embedding Learning for PCB Component Detection. In Proceedings of the Winter Conference Applications of Computer Vision, Waikoloa Village, HI, USA, 7–11 January 2019. [Google Scholar]
  21. Zhou, X.; Wang, D.; Krahenbuhl, P. Objects as Points. arXiv 2019, arXiv:1904.07850v1. [Google Scholar]
  22. Law, H.; Deng, J. CornerNet: Detecting Objects as Paired Keypoints. In Proceedings of the European Conference Computer Vision, Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
  23. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint Triplets for Object Detection. arXiv 2019, arXiv:1904.08189. [Google Scholar]
  24. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirilov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. arXiv 2020, arXiv:2005.12872. [Google Scholar]
  25. Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In Proceedings of the International Conference Learning Representations (ICLR), Virtual Event, Austria, 3–7 May 2021. [Google Scholar]
  26. Li, F.; Zeng, A.; Liu, S.; Zhang, H.; Li, H.; Zhang, L.; Ni, L.M. Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  27. Sohn, K. Improved Deep Metric Learning with Multi-class N-pair Loss Objective. In Advances in Neural Information Processing Systems; NeurIPS: New Orleans, LA, USA, 2016. [Google Scholar]
  28. Chen, X.; He, K. Exploring Simple Siamese Representation Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  29. Wu, Z.; Shen, C.; van den Hengel, A. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognit. 2019, 90, 119–133. [Google Scholar] [CrossRef]
  30. Chen, C.; Tian, X.; Xiong, Z.; Wu, F. UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1069–1076. [Google Scholar]
  31. Goutte, C.; Gaussier, E. A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. Lect. Notes Comput. Sci. 2005, 3408, 345–359. [Google Scholar]
  32. Boyd, K.; Eng, K.H.; Page, C.D. Area under the Precision-Recall Curve: Point Estimates and Confidence Intervals. Lect. Notes Comput. Sci. 2013, 8190, 451–466. [Google Scholar]
  33. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  34. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Development. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the proposed NN model for single-class component detection.
Figure 2. An example showing the keypoint of a component. In the example, the screw in the image is the target component to be inspected. The centroid of the screw is the keypoint.
Figure 3. An example of the ground truth of a heat map for the inspection of capacitors on a PCB.
Figure 4. Three network models for multiclass component placement inspection.
Figure 5. Examples of frontend and backend networks for a single class, where each Conv layer is a two-dimensional convolution layer and each Conv Trans layer is a two-dimensional transposed convolution layer. ResBlock is a residual block with a shortcut, BN denotes batch normalization, and RELU is an activation function.
Figure 6. The setup of the experiment. A high-resolution industrial camera is adopted for the acquisition of images from the PCBs.
Figure 7. The component classes considered in the experiments: (a) screw, (b) capacitor, (c) mounting hole, (d) 3-pin chip, and (e) 8-pin chip.
Figure 8. Examples of different cropping results for the training images. All object targets to be detected on the training images are marked in the examples. Objects in green, red, yellow, blue and purple boxes are mounting holes, capacitors, screws, 3-pin chips and 8-pin chips.
Figure 9. Test PCBs considered in this study. The test images are acquired from the PCBs.
Figure 10. Precision–recall curves of the five component classes considered in this study. The corresponding network model is Model 3 with a two-stage training process. (a) Screw, (b) capacitor, (c) mounting hole, (d) three-pin chip, (e) eight-pin chip.
Figure 11. Examples of the inspection results for the capacitors for different PCBs.
Figure 12. Examples for the joint inspection of screws, capacitors, mounting holes, three-pin chips, and eight-pin chips. (a) three-pin chips and eight-pin chips; (b) three-pin chips and eight-pin chips; (c) screws and mounting holes; (d) capacitors and mounting holes; (e) three-pin chips, eight-pin chips, capacitors, and mounting holes; (f) screws, three-pin chips, eight-pin chips, capacitors, and mounting holes.
Table 1. The number of augmented images for the training of each component class.

| Component Class | Capacitor | Screw | Mounting Hole | 3-Pin Chip | 8-Pin Chip |
| Number of Images | 2307 | 2119 | 2839 | 2331 | 1602 |
Table 2. The parameters of the example model containing only one frontend network and one backend network for single-class inspection. The names of the layers are defined in Figure 5. The layer size and network size are the numbers of weights for a layer and a network, respectively.

| Network | Layer | Stride Size | Kernel Size | Input Tensor Dimensions | Layer Size |
| Frontend | Conv 1 | 2 | 7 × 7 | 512 × 512 × 3 | 4736 |
| Frontend | Resblock 1 | 2 | 3 × 3 | 256 × 256 × 32 | 29,856 |
| Frontend | Resblock 2 | 2 | 3 × 3 | 128 × 128 × 64 | 119,104 |
| Frontend | Conv Trans 1 | 2 | 3 × 3 | 64 × 64 × 128 | 147,584 |
| Frontend | Conv Trans 2 | 2 | 3 × 3 | 128 × 128 × 128 | 73,792 |
| Frontend | Conv 2 | 1 | 3 × 3 | 256 × 256 × 64 | 36,928 |
| Frontend | Conv 3 | 2 | 3 × 3 | 256 × 256 × 64 | 36,928 |
| Backend | Conv 4 | 1 | 3 × 3 | 128 × 128 × 64 | 73,856 |
| Backend | Conv 5 | 1 | 1 × 1 | 128 × 128 × 128 | 129 |
| Backend | Conv 6 | 1 | 3 × 3 | 128 × 128 × 64 | 73,856 |
| Backend | Conv 7 | 1 | 1 × 1 | 128 × 128 × 128 | 258 |

Network sizes: frontend network, 452,128 weights; backend network, 148,099 weights.
Table 3. The specifications of the proposed NN models for the inspection of five component classes. The model size is defined as the number of weights in the whole model.

| Model Type | Model 1 | Model 2 | Model 3 |
| Input X Dimension | 512 × 512 × 3 | 512 × 512 × 3 | 512 × 512 × 3 |
| Output Y Dimension | 128 × 128 | 128 × 128 | 128 × 128 |
| Output S Dimension | 128 × 128 × 2 | 128 × 128 × 2 | 128 × 128 × 2 |
| Model Size | 600,743 | 1,192,623 | 1,644,751 |
| Model Configuration | 1 frontend NN, 1 backend NN | 1 frontend NN, 5 backend NNs | 2 frontend NNs, 5 backend NNs |
Table 4. The inspection accuracy of various component classes of Model 1 with single-stage and two-stage training processes.

| Training Process | Inspection Accuracy | Screw | Mounting Hole | Capacitor | 3-Pin Chip | 8-Pin Chip |
| Single-Stage [21] | AP | 0.9460 | 0.9316 | 0.9391 | 0.9682 | 0.9665 |
| Single-Stage [21] | F1 | 0.9283 | 0.8876 | 0.8867 | 0.9123 | 0.9055 |
| Two-Stage | AP | 0.9695 | 0.9400 | 0.9532 | 0.9801 | 0.9755 |
| Two-Stage | F1 | 0.9482 | 0.8912 | 0.9320 | 0.9429 | 0.9296 |
Table 5. The AP values and F1 scores of various component classes for Model 1, Model 2, and Model 3. The two-stage training process is employed for the models.

| Component Class | Model 1 AP | Model 1 F1 | Model 2 AP | Model 2 F1 | Model 3 AP | Model 3 F1 |
| Capacitor | 0.9532 | 0.9320 | 0.9739 | 0.9368 | 0.9605 | 0.9363 |
| Screw | 0.9695 | 0.9482 | 0.9709 | 0.9453 | 0.9710 | 0.9435 |
| 3-pin Chip | 0.9801 | 0.9429 | 0.9930 | 0.9734 | 0.9925 | 0.9739 |
| 8-pin Chip | 0.9755 | 0.9296 | 0.9892 | 0.9662 | 0.9920 | 0.9760 |
| Mounting Hole | 0.9400 | 0.8912 | 0.9437 | 0.9437 | 0.9723 | 0.9545 |
Table 6. The inspection accuracy of various component classes for various algorithms.

| Algorithm | Inspection Accuracy | Screw | Mounting Hole | Capacitor | 3-Pin Chip | 8-Pin Chip |
| Proposed Model 3 | AP | 0.9710 | 0.9723 | 0.9605 | 0.9925 | 0.9920 |
| Proposed Model 3 | F1 | 0.9435 | 0.9545 | 0.9363 | 0.9739 | 0.9760 |
| Faster RCNN [19] | AP | 0.9680 | 0.9335 | 0.9523 | 0.9734 | 0.9895 |
| Faster RCNN [19] | F1 | 0.9078 | 0.8755 | 0.9018 | 0.9363 | 0.9702 |
| DETR [24] | AP | 0.9800 | 0.9472 | 0.9641 | 0.9944 | 0.9986 |
| DETR [24] | F1 | 0.9469 | 0.9341 | 0.9389 | 0.9735 | 0.9946 |
| SSD + MobileNet [33] | AP | 0.9218 | 0.8986 | 0.9459 | 0.9585 | 0.9799 |
| SSD + MobileNet [33] | F1 | 0.8425 | 0.8610 | 0.8930 | 0.9042 | 0.9833 |
| YOLO v5 [34] | AP | 0.9170 | 0.9490 | 0.9530 | 0.9840 | 0.9600 |
| YOLO v5 [34] | F1 | 0.9224 | 0.9220 | 0.9423 | 0.9771 | 0.9450 |
Table 7. The weight size and computation time for inference of various algorithms.

| Algorithm | Weight Size | Inference Latency (PC) | Inference Latency (Jetson Nano) |
| Proposed Model 3 | 1,644,751 | 21.4 ms | 146.9 ms |
| Faster RCNN [19] | 28,337,682 | 56.1 ms | NA |
| DETR [24] | 41,487,306 | 206.5 ms | NA |
| SSD + MobileNet [33] | 2,601,212 | 46.4 ms | 167.6 ms |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chung, S.-T.; Hwang, W.-J.; Tai, T.-M. Keypoint-Based Automated Component Placement Inspection for Printed Circuit Boards. Appl. Sci. 2023, 13, 9863. https://doi.org/10.3390/app13179863
