Figure 1.
Overall flowchart of the proposed method.
Figure 2.
Example of banknote images captured from the (a) front side and (b) back side; (c) and (d) are the corresponding pre-processed images using MSRCR.
Figure 3.
Architecture of YOLOv3 object detector. “Conv2D” and “UpSample2D” denote the two-dimensional convolutional layers and up-sampling layers, respectively.
Figure 4.
Structures of the CNNs used in our method: (a) ResNet-18, (b) GoogLeNet, and (c) Inception-ResNet-v2. “Conv” denotes a two-dimensional convolutional layer, with the filter size and number of filters in parentheses; “MaxPool” and “AvgPool” denote the max-pooling and average-pooling layers, respectively.
Figure 5.
Structures of (a) the two-layer residual block in ResNet-18 and (b) the inception block in GoogLeNet.
Figure 6.
Structures of the residual inception blocks in the Inception-ResNet-v2 architecture: (a) Inception-Res-A, (b) Inception-Res-B, and (c) Inception-Res-C in Figure 4c.
Figure 7.
Structures of the other blocks in the Inception-ResNet-v2 architecture: (a) Stem, (b) Reduction-A, and (c) Reduction-B in Figure 4c. “s2” denotes a stride of 2.
Figure 8.
Examples of EUR banknote images in the dataset: (a) genuine, (b) scanner-reproduced, (c) smartphone-reproduced, and (d) a group photo of the three types, from upper to lower: smartphone-reproduced, scanner-reproduced, and genuine.
Figure 9.
Examples of KRW banknote images in the dataset: (a) genuine, (b) scanner-reproduced, (c) smartphone-reproduced, and (d) a group photo of the three types, from left to right: scanner-reproduced, smartphone-reproduced, and genuine.
Figure 10.
Examples of USD banknote images in the dataset: (a) genuine, (b) scanner-reproduced, (c) smartphone-reproduced, and (d) a group photo of the three types, from upper to lower: genuine, scanner-reproduced, and smartphone-reproduced.
Figure 11.
Examples of JOD banknote images in the dataset: (a) genuine, (b) fake (smartphone-reproduced).
Figure 12.
Training loss of YOLOv3 on the combined multinational dataset of the four currency types.
Figure 13.
Training losses of the CNN classifiers on the EUR dataset with various optimization methods: (a) ResNet-18, (b) GoogLeNet, and (c) Inception-ResNet-v2.
Figure 14.
Training losses of the CNN classifiers on the KRW dataset with various optimization methods: (a) ResNet-18, (b) GoogLeNet, and (c) Inception-ResNet-v2.
Figure 15.
Training losses of the CNN classifiers on the USD dataset with various optimization methods: (a) ResNet-18, (b) GoogLeNet, and (c) Inception-ResNet-v2.
Figure 16.
Correctly detected cases of (a) genuine KRW, (b) fake USD, and (c) genuine JOD banknotes using the models trained on the EUR dataset.
Figure 17.
Correctly detected cases of (a) a genuine EUR note; (b) (from top to bottom) scanner-reproduced fake, genuine, and smartphone-reproduced fake USD notes; and (c) a fake JOD note using the models trained on the KRW dataset.
Figure 18.
Correctly detected cases of (a) a fake EUR note; (b) (from top to bottom) genuine and two fake KRW notes; and (c) a genuine JOD note using the models trained on the USD dataset.
Figure 19.
Detection error cases for (a) KRW, (b) USD, and (c) JOD banknote images using the models trained on the EUR dataset.
Figure 20.
Detection error cases for (a) EUR, (b) USD, and (c) JOD banknote images using the models trained on the KRW dataset.
Figure 21.
Detection error cases for (a) EUR, (b) KRW, and (c) JOD banknote images using the models trained on the USD dataset.
Figure 22.
Visualization of the class activation maps (CAM) at the Stem, Reduction-A, Reduction-B, and last convolutional layers (Figure 4c) of correctly classified EUR banknotes using Inception-ResNet-v2 trained on the KRW dataset: (a) genuine, (b) smartphone-reproduced fake, and (c) scanner-reproduced fake.
Table 2.
Testing results of YOLOv3 on the combined multinational dataset. (Measure: mAP, unit: %).
| 1st Testing | 2nd Testing | 3rd Testing | 4th Testing | Average |
|---|---|---|---|---|
| 97.856 | 100 | 99.301 | 100 | 99.289 |
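As a sanity check, the Average column of Table 2 is simply the arithmetic mean of the four per-run mAP values; a minimal sketch:

```python
# Per-run mAP values (%) of YOLOv3 taken from Table 2.
run_maps = [97.856, 100.0, 99.301, 100.0]

# The Average column is the mean over the four test runs, rounded to three decimals.
average_map = round(sum(run_maps) / len(run_maps), 3)
print(average_map)  # 99.289
```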
Table 6.
Experimental results of training on EUR and testing on the other currency types using the proposed fusion methods on two additional CNN classifiers. (Measure: mAP, unit: %).
| Methods | KRW | USD | JOD |
|---|---|---|---|
| MAX | 67.141 | 78.449 | 42.544 |
| MAX-ABSOLUTE | 67.814 | 77.206 | 42.536 |
| MIN | 64.881 | 75.294 | 42.145 |
| MIN-ABSOLUTE | 64.109 | 76.571 | 42.647 |
| Weighted-SUM | 69.051 | 77.350 | 43.021 |
| Weighted-PRODUCT | 67.089 | 75.711 | 44.651 |
| SVM | 67.190 | 75.639 | 42.536 |
| 3-layer NN | 66.843 | 74.023 | 45.156 |
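The rule-based fusion methods listed in the tables (MAX, MIN, Weighted-SUM, Weighted-PRODUCT, and their variants) combine the output scores of the two CNN classifiers at the score level. The sketch below illustrates plausible definitions of these rules; the exact rule formulas, the function name, the example scores, and the equal weights are assumptions for illustration, not the paper's implementation:

```python
def fuse_scores(s1: float, s2: float, w1: float = 0.5, w2: float = 0.5) -> dict:
    """Combine two classifier scores with several score-level fusion rules.

    s1, s2: output scores of the two CNN classifiers (assumed in [0, 1]).
    w1, w2: fusion weights (assumed equal here; they may be tuned in practice).
    """
    return {
        "MAX": max(s1, s2),
        "MIN": min(s1, s2),
        # The *-ABSOLUTE rules are assumed to select the score with the
        # larger (or smaller) magnitude rather than the raw value.
        "MAX-ABSOLUTE": s1 if abs(s1) >= abs(s2) else s2,
        "MIN-ABSOLUTE": s1 if abs(s1) <= abs(s2) else s2,
        "Weighted-SUM": w1 * s1 + w2 * s2,
        "Weighted-PRODUCT": (s1 ** w1) * (s2 ** w2),  # geometric form, assumed
    }

# Example: fuse two hypothetical "genuine" scores from the two classifiers.
fused = fuse_scores(0.8, 0.6)
```

The learned combiners in the same tables (SVM, 3-layer NN) would instead take the pair of scores as a feature vector and be trained to output the final decision.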
Table 7.
Experimental results of training on KRW and testing on the other currency types using the proposed fusion methods on two additional CNN classifiers. (Measure: mAP, unit: %).
| Methods | EUR | USD | JOD |
|---|---|---|---|
| MAX | 76.769 | 69.921 | 34.441 |
| MAX-ABSOLUTE | 83.338 | 67.928 | 32.530 |
| MIN | 86.097 | 56.378 | 31.474 |
| MIN-ABSOLUTE | 79.282 | 58.089 | 33.359 |
| Weighted-SUM | 86.311 | 67.620 | 28.782 |
| Weighted-PRODUCT | 86.374 | 66.414 | 28.412 |
| SVM | 77.152 | 67.570 | 32.029 |
| 3-layer NN | 76.842 | 71.687 | 25.322 |
Table 8.
Experimental results of training on USD and testing on the other currency types using the proposed fusion methods on two additional CNN classifiers. (Measure: mAP, unit: %).
| Methods | EUR | KRW | JOD |
|---|---|---|---|
| MAX | 49.006 | 57.975 | 19.029 |
| MAX-ABSOLUTE | 57.670 | 57.613 | 23.548 |
| MIN | 58.424 | 56.474 | 21.774 |
| MIN-ABSOLUTE | 50.060 | 56.604 | 17.639 |
| Weighted-SUM | 57.670 | 57.671 | 23.548 |
| Weighted-PRODUCT | 59.946 | 58.272 | 22.374 |
| SVM | 55.781 | 57.298 | 19.525 |
| 3-layer NN | 50.504 | 55.958 | 18.448 |
Table 9.
Experimental results of training on EUR, KRW, and USD, and testing on the other currency types using the proposed fusion methods on two additional CNN classifiers. (Measure: average classification accuracy, unit: %).
Columns are labeled training currency → testing currency.

| Methods | EUR→KRW | EUR→USD | EUR→JOD | KRW→EUR | KRW→USD | KRW→JOD | USD→EUR | USD→KRW | USD→JOD |
|---|---|---|---|---|---|---|---|---|---|
| MAX | 68.213 | 81.314 | 44.176 | 77.128 | 71.034 | 41.556 | 51.125 | 59.125 | 21.452 |
| MAX-ABSOLUTE | 69.273 | 78.356 | 44.365 | 85.974 | 69.156 | 34.781 | 59.781 | 59.437 | 28.992 |
| MIN | 65.992 | 76.572 | 43.657 | 87.137 | 58.165 | 31.095 | 60.128 | 55.786 | 23.124 |
| MIN-ABSOLUTE | 65.674 | 77.298 | 44.129 | 79.103 | 59.457 | 32.598 | 52.251 | 57.913 | 18.973 |
| Weighted-SUM | 73.324 | 77.113 | 44.981 | 86.028 | 69.135 | 29.891 | 59.135 | 58.781 | 30.342 |
| Weighted-PRODUCT | 68.156 | 76.913 | 45.897 | 89.106 | 67.578 | 29.678 | 70.571 | 70.324 | 24.762 |
| SVM | 68.382 | 74.893 | 44.987 | 79.532 | 68.875 | 33.991 | 57.179 | 59.114 | 21.156 |
| 3-layer NN | 68.114 | 75.923 | 51.621 | 77.106 | 73.986 | 27.523 | 52.198 | 56.143 | 18.334 |
Table 10.
Confusion matrices of experimental results of training on EUR, KRW, and USD, and testing on the other currency types using proposed fusion methods on two additional CNN classifiers. (Measure: average classification accuracy, unit: %).
Rows are labeled training currency → testing currency; each entry is the percentage of notes of the actual class assigned to the predicted class.

| Training→Testing | Actual Real → Pred. Real | Actual Real → Pred. Fake | Actual Fake → Pred. Real | Actual Fake → Pred. Fake |
|---|---|---|---|---|
| EUR→KRW | 73.324 | 26.676 | 26.676 | 73.324 |
| EUR→USD | 81.314 | 18.686 | 18.686 | 81.314 |
| EUR→JOD | 51.621 | 48.379 | 48.379 | 51.621 |
| KRW→EUR | 89.106 | 10.894 | 10.894 | 89.106 |
| KRW→USD | 73.986 | 26.014 | 26.014 | 73.986 |
| KRW→JOD | 41.556 | 58.444 | 58.444 | 41.556 |
| USD→EUR | 70.571 | 29.429 | 29.429 | 70.571 |
| USD→KRW | 70.324 | 29.676 | 29.676 | 70.324 |
| USD→JOD | 30.342 | 69.658 | 69.658 | 30.342 |
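Each percentage in the confusion matrices of Table 10 is the share of notes of one actual class (genuine or fake) assigned to a predicted class. A minimal sketch of this computation from raw counts; the counts below are illustrative, not taken from the paper:

```python
def class_accuracy(correct: int, incorrect: int) -> float:
    """Percentage of samples of one actual class that were classified correctly."""
    return 100.0 * correct / (correct + incorrect)

# Illustrative counts: of 1000 genuine notes, 891 predicted real, 109 predicted fake.
real_as_real = class_accuracy(891, 109)  # 89.1
real_as_fake = 100.0 - real_as_real      # 10.9
```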