
Fast Execution of an ASIFT Hardware Accelerator by Prior Data Processing

1 System LSI Division, Samsung Electronics Corporation, Hwaseong 18448, Korea
2 Department of Electronic Engineering, Sun Moon University, Asan 31460, Korea
3 Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Electronics 2019, 8(10), 1176; https://doi.org/10.3390/electronics8101176
Submission received: 29 September 2019 / Accepted: 15 October 2019 / Published: 17 October 2019
(This article belongs to the Section Circuit and Signal Processing)

Abstract

This paper proposes a new ASIFT hardware architecture that processes a Video Graphics Array (VGA)-sized (640 × 480) video in real time. A previous ASIFT accelerator suffers from low utilization because affine-transformed images are computed repeatedly. To improve hardware utilization, the proposed architecture adopts two schemes that increase the utilization of a bottleneck hardware module: a prior anti-aliasing scheme and a prior down-scaling scheme. In the proposed method, 1 × 1 and 0.5 × 1 blurred images are generated once and reused for creating the various affine-transformed images. Thanks to the proposed schemes, the idle time spent waiting for the affine transform is significantly decreased, and consequently, the operation speed increases substantially. Experimental results show that the proposed ASIFT hardware accelerator processes a VGA-sized video at 28 frames/s, which is 1.36 times faster than the previous work.

1. Introduction

Local features have been widely used for scene matching, which is important in many computer vision applications such as object detection, tracking, and motion estimation. For robust scene matching, scale-invariant feature transform (SIFT) proposed by Lowe [1] has been used as one of the most reliable local features because translation, rotation, and scale invariances are effectively supported. Unfortunately, the performance of SIFT is degraded when the direction of a camera view is changed. To overcome this limitation, Morel et al. proposed an affine invariant extension of SIFT (ASIFT) [2].
Since a large amount of computation is required in a SIFT algorithm, optimized hardware accelerators for SIFT have been proposed [3,4,5,6,7]. The ASIFT algorithm generates many images transformed by affine transforms in order to simulate the view change of a camera. Then, SIFT features are extracted in the simulated images. This means that the computational complexity of an ASIFT algorithm is much higher than that of a SIFT algorithm. In order to increase the processing speed of a complex ASIFT algorithm, Yum et al. [8] proposed an ASIFT hardware architecture that adopts a modified affine transform to reduce the latency of an external memory, and consequently, the operation speed of an ASIFT algorithm increases significantly. Nonetheless, this hardware accelerator processes a VGA-sized (640 × 480) video sequence at 20 frames/s (fps), which is not fast enough for real-time processing.
In order to increase the operation speed of an ASIFT hardware implementation, this paper proposes two schemes that increase the utilization of the affine transform module, which is the bottleneck of the previous hardware accelerator [8]. The first is a prior anti-aliasing scheme that computes a 1 × 1 blurred image and stores it in the external memory. By reusing the stored image for generating the various simulated images, redundant data fetching for generating the 1 × 1 blurred image is removed. The second is a prior down-scaling scheme: a 0.5 × 1 blurred image is generated and reused for generating the simulated images whose width is scaled by less than 0.5. A word of the 0.5 × 1 blurred image includes more valid pixels than a word of the 1 × 1 blurred image, so the stall cycles spent waiting for valid data are decreased. As a result, the proposed ASIFT hardware implementation processes a VGA-sized video at 28 fps.

2. Previous Work

2.1. ASIFT Algorithm

The ASIFT algorithm was proposed to achieve full affine invariance, such that it can find correspondences between two images of the same scene even when they are captured from different viewpoints [2]. In the ASIFT algorithm, simulated images for various camera viewpoints are generated by transforming a source image with affine transform matrices. Then, SIFT features are computed in the simulated images. Because these SIFT features are obtained by considering the viewpoint change, correspondences can be found between two images whose camera viewpoints differ.
The images captured by a camera at various positions can be interpreted through an affine decomposition. The camera position is represented in hemispherical coordinates, as shown in Figure 1; the center (o) of the hemisphere is located at the center of a source image u. The latitude and longitude of the camera position are represented by θ and φ, respectively. The affine distortion caused by the change of the camera position is interpreted as the rotation and scaling of an image. The affine transform is represented by Equation (1), in which image rotation and scaling are represented by a rotation matrix (R_φ) and a scaling matrix (T_{1,1/t}), respectively.
$$A = T_{1,1/t}\,R_{\varphi} = \begin{bmatrix} 1 & 0 \\ 0 & 1/t \end{bmatrix} \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}, \qquad t = \frac{1}{\cos\theta} \tag{1}$$
Morel et al. proposed the proper range and sampling step of t and φ in [2]. In Equation (1), t ranges from 1 to 4√2, and φ ranges from 0° to 180°. The sampling step of the tilt (∆t) is √2, and the sampling step of φ (∆φ) is 72°/t. The number of simulated images is 42 when the sampling range and step in Reference [2] are used. When a simulated image is obtained by the affine transform, the SIFT algorithm in Reference [1] is used to generate SIFT features.
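As a rough sketch, the sampling grid above can be enumerated as follows; the boundary convention for the longitude (here φ < 180°) is our assumption, so the total may differ by a view or two from the count quoted above.

```python
import math

def asift_viewpoints(delta_t=math.sqrt(2), t_max=4 * math.sqrt(2), b=72.0):
    """Enumerate the (tilt, longitude) sampling grid of Morel et al. [2]:
    t grows geometrically from 1 to t_max in steps of delta_t; for each
    tilt t > 1, phi covers [0, 180) degrees in steps of b / t."""
    views = []
    t = 1.0
    while t <= t_max + 1e-9:
        if t == 1.0:
            views.append((t, 0.0))  # no tilt: longitude is irrelevant
        else:
            phi = 0.0
            while phi < 180.0 - 1e-6:
                views.append((t, phi))
                phi += b / t
        t *= delta_t
    return views

print(len(asift_viewpoints()), "viewpoints for delta_t = sqrt(2)")
print(len(asift_viewpoints(delta_t=2)), "viewpoints for delta_t = 2")  # 16, as in Section 2.2
```

The same function with ∆t = 2 reproduces the 16 viewpoints used by the hardware accelerator described in Section 2.2.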
When the ASIFT hardware fetches a source image from the external memory, implemented with a Dynamic Random-Access Memory (DRAM), the image is fetched in a rotated order because of the rotation matrix R_φ of A in Equation (1). This means that the ASIFT accelerator accesses the external memory in a discontinuous order, so a burst transfer cannot be requested, which slows down DRAM accesses significantly.

2.2. ASIFT Hardware Accelerator

In order to increase the processing speed of the ASIFT algorithm, Yum et al. [8] proposed a hardware implementation of the algorithm. This hardware adopts a modified affine transform matrix B, given by Equation (2), to reduce the latency of the external memory. Matrix B consists of a scaling matrix T_{sx,sy} and a skewing matrix S_g, but no rotation matrix. Thus, the source image is fetched in a continuous order, and a burst transfer mode can be used.
$$B = S_g\,T_{s_x,s_y} = \begin{bmatrix} 1 & g \\ 0 & 1 \end{bmatrix} \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}, \qquad s_x = \frac{1}{t}\sqrt{t^2\cos^2\varphi + \sin^2\varphi}, \quad s_y = \frac{1}{\sqrt{t^2\cos^2\varphi + \sin^2\varphi}}, \quad g = \tan(\tau) = \left(\frac{1}{t} - t\right)\sin\varphi\cos\varphi \tag{2}$$
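The scaling and skewing parameters of Equation (2) can be checked numerically; the sketch below reproduces the s_x and s_y entries of Table 1 (e.g., viewpoint 2, with t = 2 and φ = 36°).

```python
import math

def affine_params(t, phi_deg):
    """Scaling (s_x, s_y) and skew (g) of the rotation-free matrix B in
    Equation (2), computed from the tilt t and longitude phi."""
    phi = math.radians(phi_deg)
    root = math.sqrt(t ** 2 * math.cos(phi) ** 2 + math.sin(phi) ** 2)
    s_x = root / t                 # horizontal scaling factor
    s_y = 1.0 / root               # vertical scaling factor
    g = (1.0 / t - t) * math.sin(phi) * math.cos(phi)  # skew, g = tan(tau)
    return s_x, s_y, g

# Viewpoint 2 of Table 1: t = 2, longitude 36 degrees
s_x, s_y, g = affine_params(2, 36)
print(f"s_x = {s_x:.3f}, s_y = {s_y:.3f}")  # Table 1 lists 0.861 and 0.581
```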
The ASIFT hardware architecture proposed by Yum et al. [8] is shown in Figure 2. In this figure, gray blocks are internal and external buffers, and the striped block stands for the bus system connecting the ASIFT accelerator and the external memory. In order to increase the operation speed, the ASIFT hardware accelerator adopts the state-of-the-art SIFT hardware accelerator architecture proposed in Reference [7].
In order to reduce the computational load, the ASIFT hardware accelerator proposed by Yum et al. [8] increases the tilt sampling step (∆t) from √2 to 2, which reduces the computational load to 43%. Due to this simplification, the number of viewpoints is decreased from 42 to 16.

2.3. Analysis of Hardware Utilization of Previous Work

The throughput of the affine transform module of the ASIFT hardware proposed by Yum et al. [8] is limited by the anti-alias filtering and the down-scaling operation. Figure 3 explains the cause of the slow operation. This figure shows an example in which a simulated image is generated with sx = 0.25 and sy = 0.25 in the affine transform module. In Figure 3, the gray circles represent the pixels of the source image stored in the source image buffer, and the white circles are the pixels of the source image to be fetched from the external memory. The dotted rectangular box corresponds to the kernel of the anti-alias filter, and the black circles are the pixels that have already been filtered by the kernel. A vertical scaler employs nearest-neighbor (NN) interpolation and provides the valid row addresses to the anti-alias filter so that it processes only the required pixel lines. A horizontal scaler also performs NN interpolation, and the striped circles indicate the pixels sampled by the interpolation.
The affine transform module is the throughput bottleneck of the ASIFT operation for three reasons. The first is the slow operation of the anti-alias filter. The filter is applied to one pixel per cycle; thus, its throughput is 1 byte/cycle, which is slower than that of the source image loader (2.46 bytes/cycle). The second is that data are not fetched fast enough for the filter. When 1/sy is large, pixels are filtered sparsely in the vertical direction, which means that the data fetch speed needs to be increased. Figure 3 shows an example in which the filtering for the fifth line is completed and the filtering for the ninth line should start; however, the 12th line has not been fetched yet. In this case, the vertical scaler waits for the required data on the 12th line. The third is that the ratio of valid data in a line is low when 1/sx is large. In Figure 3, only every fourth pixel in the horizontal direction (a striped circle) is selected for the down-sampled image, and the rest are discarded. Thus, the throughput of the horizontal scaler decreases while waiting for valid data.
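The data dependency behind the second cause can be sketched as follows; the 7-tap vertical kernel (radius 3) is inferred from the Figure 3 example and is an assumption here, not a figure from the paper.

```python
def fetch_dependency(out_lines, s_y, radius=3):
    """For NN vertical scaling by s_y followed by a (2 * radius + 1)-tap
    vertical anti-alias kernel, return (filtered line, last source line
    required) pairs. The kernel radius of 3 is an assumption inferred
    from the example in Figure 3."""
    step = round(1 / s_y)          # only every step-th source line is filtered
    return [(1 + i * step, 1 + i * step + radius) for i in range(out_lines)]

# s_y = 0.25 as in Figure 3: filtering the 9th line cannot start until the
# 12th source line has arrived, so the vertical scaler stalls until then.
for line, needed in fetch_dependency(3, 0.25):
    print(f"filter line {line} -> needs source lines up to {needed}")
```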
In order to fully utilize the SIFT generation module, the throughput of the affine transform module needs to be increased to 31 bytes/cycle. As shown in Table 1, however, the affine transform module proposed by Yum et al. [8] does not satisfy this condition for ASIFT(2)–ASIFT(15), where ASIFT(i) denotes the ASIFT operation for viewpoint i. This means that the SIFT generation module stays idle waiting for data from the affine transform module.

3. Proposed Schemes and Hardware Architecture

3.1. Increasing the Throughput of Affine Transform Module

To increase the throughput of the affine transform module, this paper proposes a prior anti-aliasing scheme and a prior down-scaling scheme. In the prior anti-aliasing scheme, a 1 × 1 blurred image is generated and stored in the external memory when ASIFT(0) is processed. For ASIFT(1)–ASIFT(15), the ASIFT hardware reads the proper pixel lines from the 1 × 1 blurred image stored in the external memory. Figure 4a shows an example in which anti-alias filtering is not performed, but filtered data (black circles) are fetched from the external memory. Because the filtering computations for ASIFT(1)–ASIFT(15) are removed, the slow filtering operation no longer limits the throughput of the affine transform module, and the data fetch speed does not cause a stall in the vertical scaler for large 1/sy. The throughput of the vertical scaler is increased up to the throughput of the source image loader (2.46 bytes/cycle).
The proposed prior down-scaling scheme computes a 1/2 down-sampled image of the 1 × 1 blurred image in the horizontal direction. This down-sampled image is referred to as a 0.5 × 1 blurred image. It is stored in the external memory when ASIFT(0) is processed. The prior down-scaling scheme doubles the throughput of the horizontal scaler when sx is smaller than 0.5. Figure 4b presents an example in which the prior down-scaling scheme is not adopted and sx = 0.25. In this figure, the horizontal scaler samples only the first byte of each word and transfers it to the scaled image buffer per cycle, and consequently, the throughput decreases to sx. Figure 4c shows an example in which the scheme is adopted. In the proposed scheme, the affine transform module fetches the 0.5 × 1 blurred image and down-samples it with a modified scaling ratio s′x = sx × 2. As a result, the throughput of the horizontal scaler is doubled.
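The effect of the scheme can be illustrated with a simplified throughput model; this is an idealized sketch under the assumption of one 4-byte word consumed per cycle, not the cycle-accurate hardware behavior.

```python
def valid_bytes_per_cycle(s_x, word_bytes=4, prior_downscaling=False):
    """Simplified model of the horizontal scaler: one word is consumed per
    cycle and only every (1/s_x)-th pixel survives NN down-sampling, so the
    useful output rate is word_bytes * s_x. With the prior down-scaling
    scheme, the 0.5 x 1 blurred image is fetched instead and the effective
    ratio becomes s_x' = 2 * s_x."""
    if prior_downscaling and s_x < 0.5:
        s_x *= 2
    return word_bytes * s_x

print(valid_bytes_per_cycle(0.25))                          # 1.0 valid byte/cycle
print(valid_bytes_per_cycle(0.25, prior_downscaling=True))  # 2.0: doubled
```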

3.2. Proposed ASIFT Hardware Architecture

In this paper, a modified ASIFT hardware architecture is proposed as shown in Figure 5. The proposed hardware architecture consists of an affine transform module and a SIFT generation module. The affine transform module is a modified version of that in previous work [8], and the architecture of the SIFT generation module is the same as that in Reference [8]. In order to obtain the ASIFT features for all viewpoint indices, a source mux selects the proper source data among three external buffers according to the current viewpoint index, and this operation is performed repeatedly. The first operation of the ASIFT hardware accelerator is the derivation of ASIFT(0) using the source image. For this operation, the source mux selects the external source image buffer. Because the transform matrix for ASIFT(0) is the identity matrix, the fetched source image is transferred to the SIFT generation module without any scaling operation. At the same time, it is provided to the anti-alias filter and the horizontal 1/2 scaler to generate the 1 × 1 blurred image and the 0.5 × 1 blurred image, which are then stored in the external memory. After the ASIFT(0) operation, the ASIFT features for the other viewpoint indices can be derived in any order because there is no dependency among ASIFT(1)–ASIFT(15). The ASIFT hardware chooses between the 1 × 1 blurred image and the 0.5 × 1 blurred image according to the prior horizontal scaling ratio ps_x^i = ⌈s_x^i × 2⌉/2. If ps_x^i is 1, the 1 × 1 blurred image is selected. Otherwise, the 0.5 × 1 blurred image is used.
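The selection rule above can be sketched directly; the s_x values below are taken from Table 1.

```python
import math

def select_blurred_image(s_x):
    """Prior horizontal scaling ratio ps_x = ceil(2 * s_x) / 2 decides which
    pre-blurred image the affine transform module fetches."""
    ps_x = math.ceil(s_x * 2) / 2
    return ps_x, ("1 x 1" if ps_x == 1.0 else "0.5 x 1")

# s_x values from Table 1: viewpoints 10-12 have s_x < 0.5
for index, s_x in [(2, 0.861), (10, 0.390), (11, 0.250)]:
    ps_x, image = select_blurred_image(s_x)
    print(f"viewpoint {index}: s_x = {s_x} -> ps_x = {ps_x} -> {image} blurred image")
```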
Table 2 shows the design specification of the proposed ASIFT hardware. The proposed hardware is synthesized in a 130-nm process technology with Synopsys Design Compiler (DC Ultra, version L-2016.03). The gate count of the hardware is 505 K, the maximum operating frequency is 190 MHz, and the size of the internal memory is 467.5 Kbits. The proposed hardware uses 12.6 Mbits of external memory space for a VGA-sized image.

4. Results and Discussion

Experiments were carried out under the same conditions as the previous work [8]. The proposed ASIFT hardware uses a Synchronous Dynamic Random-Access Memory (SDRAM) as the external memory. The initial latency of the SDRAM is 11 cycles: the first word is received after 11 cycles, and each subsequent word is received in the following cycle. A bus system connecting the ASIFT hardware to the external memory supports a burst transfer of length 16, and a word consisting of 4 bytes is transferred in each cycle by the bus system. The size of one pixel is 1 byte.
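Under one plausible reading of this timing (an assumption on our part), these parameters reproduce the 2.46 bytes/cycle source-image-loader rate quoted in Section 2.3: a 16-word burst delivers 64 bytes in 11 + 15 = 26 cycles.

```python
def burst_bandwidth(burst_len=16, word_bytes=4, initial_latency=11):
    """Effective bandwidth of one SDRAM burst: the first word arrives after
    the initial latency and each remaining word one cycle later. This is a
    simple model of the timing described in the text, not a datasheet value."""
    cycles = initial_latency + (burst_len - 1)
    return burst_len * word_bytes / cycles

print(f"{burst_bandwidth():.2f} bytes/cycle")  # 2.46
```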

4.1. Throughput

In order to evaluate the enhancement by the proposed schemes, Table 3 presents a comparison of the throughput of three affine transform modules. The first column represents the viewpoint index, and the second column presents the throughput of the affine transform module proposed by Yum et al. [8]. Except for viewpoints 0 and 1, the throughput of this module is less than 31 bytes/cycle, which means that the SIFT generation module is not fully utilized for almost all viewpoints. The third column shows the throughput of the affine transform module with the prior anti-aliasing scheme, and the fourth column presents the results with both the prior anti-aliasing and prior down-scaling schemes. When the prior anti-aliasing scheme is adopted, the average throughput is increased to 30.10 bytes/cycle. When the prior down-scaling scheme is adopted as well, the throughput of the proposed affine transform module is increased to 31 bytes/cycle on average, which means that the SIFT generation module of the proposed architecture is fully utilized. For viewpoints 10–12, sx is smaller than 0.5, and the throughput of the horizontal scaler increases by using the 0.5 × 1 blurred image.
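The column averages can be verified from the per-viewpoint values transcribed from Table 3:

```python
# Per-viewpoint throughput (bytes/cycle) transcribed from Table 3
previous = [31.00, 31.00, 26.69, 17.58, 17.58, 26.69, 19.08, 19.07,
            19.07, 19.08, 12.09, 7.75, 12.09, 19.10, 19.07, 19.07]
anti_aliasing = [31.00, 31.00, 31.00, 31.00, 31.00, 31.00, 31.00, 31.00,
                 31.00, 31.00, 29.76, 19.08, 29.76, 31.00, 31.00, 31.00]
both_schemes = [31.00] * 16

for name, column in [("previous work [8]", previous),
                     ("prior anti-aliasing", anti_aliasing),
                     ("+ prior down-scaling", both_schemes)]:
    print(f"{name:20s}: {sum(column) / len(column):.2f} bytes/cycle on average")
```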

4.2. Operation Speed

In order to measure the operation speed of the ASIFT hardware accelerator for a VGA-sized image, Register-Transfer Level (RTL) simulation was carried out at an operating frequency of 190 MHz, which is the maximum frequency of the proposed hardware. The test images provided by Mikolajczyk et al. [9] were used. The experimental results are shown in Table 4. The first column lists the test images. The second and third columns show the number of keypoints and the operation time of the ASIFT hardware accelerator proposed in Reference [8]; the fourth and fifth columns show the same quantities for the proposed hardware. As shown in Table 4, by adopting the proposed pre-processing schemes, the operation speed is increased by 1.36 times on average. The numbers of keypoints are not exactly the same because the scaled images computed in the previous work and by the proposed hardware accelerator are not exactly the same.
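The average speed-up and the 28 fps figure follow directly from the per-frame times transcribed from Table 4:

```python
# Per-frame operation times (ms) transcribed from Table 4
previous_ms = [46.51, 57.93, 49.76, 49.80, 44.33, 46.11, 46.22, 44.78]
proposed_ms = [33.04, 46.40, 37.07, 37.32, 30.86, 33.22, 33.35, 31.98]

avg_previous = sum(previous_ms) / len(previous_ms)
avg_proposed = sum(proposed_ms) / len(proposed_ms)
print(f"{avg_previous:.2f} ms -> {avg_proposed:.2f} ms per frame")
print(f"speed-up: {avg_previous / avg_proposed:.2f}x, "
      f"frame rate: {1000 / avg_proposed:.1f} fps")
```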

4.3. Matching Accuracy

In order to compare the matching accuracy, Figure 6 presents the matching scores of the proposed accelerator, the previous design [8], and GPU-ASIFT, proposed by Codreanu et al. [10]. The matching score is a metric for evaluating the matching accuracy of local features [9]. The test image sets given by Morel et al. [2] were used for the experiments. In Figure 6, the matching scores of the proposed method are the same as those of the previous design [8]. The matching score of GPU-ASIFT [10] is higher than that of the proposed architecture for most images. As the latitude and longitude increase, however, the matching score of GPU-ASIFT drops significantly and even falls to zero under certain conditions. This means that GPU-ASIFT does not maintain affine invariance. In contrast, the proposed ASIFT hardware accelerator finds correspondences over the full range of latitude and longitude.

5. Conclusions

This paper proposes an ASIFT hardware architecture with an enhanced operation speed. By increasing the throughput of the bottleneck module, the utilization of the ASIFT hardware modules is increased, and the throughput of the entire hardware accelerator is increased as well. The proposed prior anti-aliasing scheme reuses the anti-alias-filtered image; the computation speed is improved by removing the redundant anti-alias filtering operations. The proposed prior down-scaling scheme reuses the filtered image down-sampled by 1/2 in the horizontal direction; this scheme doubles the throughput of the horizontal scaler module when the width of a simulated image is scaled by less than 0.5. Thanks to the proposed methods, the throughput of the bottleneck module, the affine transform module, is increased to 31 bytes/cycle, which makes the ASIFT hardware fully utilized for all viewpoints. The operation speed of the proposed accelerator is increased by 1.36 times on average compared with previous work [8] without any degradation of matching accuracy. As a result, the proposed ASIFT hardware processes a VGA image at 28 frames/s.

Author Contributions

Conceptualization, methodology, simulation, J.Y.; data analysis, J.Y. and J.-S.K.; validation, J.Y., J.-S.K., and H.-J.L.

Funding

This work was supported by the 2018 Sun Moon University Research Grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  2. Morel, J.-M.; Yu, G. ASIFT: A new framework for fully affine invariant image comparison. SIAM J. Imaging Sci. 2009, 2, 438–469.
  3. Hsu, P.H.; Tseng, Y.C.; Chang, T.S. Low memory cost bilateral filtering using stripe-based sliding integral histogram. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 3120–3123.
  4. Bonato, V.; Marques, E.; Constantinides, G.A. A parallel hardware architecture for scale and rotation invariant feature detection. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1703–1712.
  5. Huang, F.C.; Huang, S.Y.; Ker, J.W.; Chen, Y.C. High-performance SIFT hardware accelerator for real-time image feature extraction. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 340–351.
  6. Kim, E.S.; Lee, H.-J. A novel hardware design for SIFT generation with reduced memory requirement. J. Semicond. Technol. 2013, 13, 157–169.
  7. Yum, J.; Lee, C.-H.; Kim, J.-S.; Lee, H.-J. A novel hardware architecture with reduced internal memory for real-time extraction of SIFT in an HD video. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 1943–1954.
  8. Yum, J.; Lee, C.-H.; Park, J.; Kim, J.-S.; Lee, H.-J. A hardware architecture for the affine-invariant extension of SIFT. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 3251–3261.
  9. Mikolajczyk, K.; Tuytelaars, T.; Schmid, C. A comparison of affine region detectors. Int. J. Comput. Vis. 2005, 65, 43–72.
  10. Codreanu, V.; Dong, F.; Liu, B. GPU-ASIFT: A fast fully affine-invariant feature extraction algorithm. In Proceedings of the IEEE High Performance Computing and Simulation (HPCS), Helsinki, Finland, 1–5 July 2013; pp. 474–481.
Figure 1. An interpretation of a camera position of the affine decomposition.
Figure 2. ASIFT hardware architecture proposed by Yum et al. [8].
Figure 3. An interpretation of the throughput of the affine transform module in the ASIFT hardware proposed by Yum et al. [8].
Figure 4. Interpretations of the improvement in throughput; (a) proposed prior anti-aliasing scheme; (b) 1 × 1 blurred image in the prior down-scaling scheme; (c) 0.5 × 1 blurred image in the prior down-scaling scheme.
Figure 5. Proposed ASIFT hardware architecture. This hardware consists of an affine transform module, which is a modified version of previous work [8], and a SIFT generation module, which is the same as that in Reference [8]. The affine transform module computes 1 × 1 and 0.5 × 1 blurred images and stores them in the external memory when the viewpoint index is 0. Then, this module reuses proper source data among three types of data in the external memory according to a viewpoint index.
Figure 6. Comparisons of the matching scores of previous works [8,10] and the proposed hardware.
Table 1. Throughput of the affine transform module proposed by Yum et al. [8].

| Viewpoint Index | t | Longitude (°) | s_x | s_y | Throughput (pixels/cycle) |
|---|---|---|---|---|---|
| 0 | 1 | 0 | 1.000 | 1.000 | 31.00 |
| 1 | 2 | 0 | 1.000 | 0.500 | 31.00 |
| 2 | 2 | 36 | 0.861 | 0.581 | 26.69 |
| 3 | 2 | 72 | 0.567 | 0.882 | 17.58 |
| 4 | 2 | 108 | 0.567 | 0.882 | 17.58 |
| 5 | 2 | 144 | 0.861 | 0.581 | 26.69 |
| 6 | 4 | 0 | 1.000 | 0.250 | 19.08 |
| 7 | 4 | 18 | 0.954 | 0.262 | 19.07 |
| 8 | 4 | 36 | 0.822 | 0.304 | 19.07 |
| 9 | 4 | 54 | 0.622 | 0.402 | 19.08 |
| 10 | 4 | 72 | 0.390 | 0.640 | 12.09 |
| 11 | 4 | 90 | 0.250 | 1.000 | 7.75 |
| 12 | 4 | 108 | 0.390 | 0.641 | 12.09 |
| 13 | 4 | 126 | 0.621 | 0.403 | 19.10 |
| 14 | 4 | 144 | 0.822 | 0.304 | 19.07 |
| 15 | 4 | 162 | 0.954 | 0.262 | 19.07 |
Table 2. Implementation results of the proposed hardware architecture.

| Technology | 130 nm |
|---|---|
| Maximum operating frequency | 190 MHz |
| Gate count (except memory) | 505 K |
| Internal memory size | 467.5 Kbits |
| External memory size | 12.6 Mbits |
Table 3. Comparison of the throughput of the three affine transform modules.

| Viewpoint Index | Previous Work [8] (bytes/cycle) | Prior Anti-Aliasing (A) (bytes/cycle) | A + Prior Down-Scaling (bytes/cycle) |
|---|---|---|---|
| 0 | 31.00 | 31.00 | 31.00 |
| 1 | 31.00 | 31.00 | 31.00 |
| 2 | 26.69 | 31.00 | 31.00 |
| 3 | 17.58 | 31.00 | 31.00 |
| 4 | 17.58 | 31.00 | 31.00 |
| 5 | 26.69 | 31.00 | 31.00 |
| 6 | 19.08 | 31.00 | 31.00 |
| 7 | 19.07 | 31.00 | 31.00 |
| 8 | 19.07 | 31.00 | 31.00 |
| 9 | 19.08 | 31.00 | 31.00 |
| 10 | 12.09 | 29.76 | 31.00 |
| 11 | 7.75 | 19.08 | 31.00 |
| 12 | 12.09 | 29.76 | 31.00 |
| 13 | 19.10 | 31.00 | 31.00 |
| 14 | 19.07 | 31.00 | 31.00 |
| 15 | 19.07 | 31.00 | 31.00 |
| Average | 19.75 | 30.10 | 31.00 |
Table 4. Comparison of the operation time in previous work [8] and that obtained with the proposed hardware for VGA images.

| Test Image | Keypoints (Previous [8]) | Time (ms/frame, Previous [8]) | Keypoints (Proposed) | Time (ms/frame, Proposed) |
|---|---|---|---|---|
| graf1 | 2598 | 46.51 | 2605 | 33.04 |
| bark1 | 4277 | 57.93 | 4278 | 46.40 |
| boat1 | 3133 | 49.76 | 3131 | 37.07 |
| tree1 | 3191 | 49.80 | 3199 | 37.32 |
| leuven1 | 2237 | 44.33 | 2221 | 30.86 |
| ubc1 | 2558 | 46.11 | 2550 | 33.22 |
| bike1 | 2612 | 46.22 | 2606 | 33.35 |
| wall1 | 2405 | 44.78 | 2398 | 31.98 |
| Average | 2877 | 48.18 | 2874 | 35.41 |

Yum, J.; Kim, J.-S.; Lee, H.-J. Fast Execution of an ASIFT Hardware Accelerator by Prior Data Processing. Electronics 2019, 8, 1176. https://doi.org/10.3390/electronics8101176